Result: FAILURE
Tests: 0 failed / 7 succeeded
Started: 2022-09-19 20:30
Elapsed: 1h6m
Revision: main

No Test Failures!


7 Passed Tests

20 Skipped Tests

Error lines from build-log.txt

... skipping 901 lines ...
Status: Downloaded newer image for quay.io/jetstack/cert-manager-controller:v1.9.1
quay.io/jetstack/cert-manager-controller:v1.9.1
+ export GINKGO_NODES=3
+ GINKGO_NODES=3
+ export GINKGO_NOCOLOR=true
+ GINKGO_NOCOLOR=true
+ export GINKGO_ARGS=--fail-fast
+ GINKGO_ARGS=--fail-fast
+ export E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ export ARTIFACTS=/logs/artifacts
+ ARTIFACTS=/logs/artifacts
+ export SKIP_RESOURCE_CLEANUP=false
+ SKIP_RESOURCE_CLEANUP=false
... skipping 78 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6 --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition.yaml
mkdir -p /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/extension/config/default > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension/deployment.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/ginkgo-v2.1.4 -v --trace --tags=e2e --focus="\[K8s-Upgrade\]"  --nodes=3 --no-color=true --output-dir="/logs/artifacts" --junit-report="junit.e2e_suite.1.xml" --fail-fast . -- \
    -e2e.artifacts-folder="/logs/artifacts" \
    -e2e.config="/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml" \
    -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false
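For reference, a minimal sketch of reproducing this focused run from a local cluster-api checkout (the GINKGO_FOCUS variable and the test-e2e make target are assumptions about the upstream Makefile, not taken from this log):

# Sketch only: re-run the [K8s-Upgrade] e2e focus locally.
# Assumes Docker and a sigs.k8s.io/cluster-api checkout on GOPATH.
cd "$(go env GOPATH)/src/sigs.k8s.io/cluster-api"
export GINKGO_NODES=3                 # same parallelism as this job
export GINKGO_NOCOLOR=true
export GINKGO_ARGS=--fail-fast
export GINKGO_FOCUS='\[K8s-Upgrade\]' # assumed variable; the job passes --focus to ginkgo directly
export E2E_CONF_FILE="$(pwd)/test/e2e/config/docker.yaml"
export ARTIFACTS=/tmp/artifacts
export SKIP_RESOURCE_CLEANUP=false
make test-e2e                         # assumed wrapper around the ginkgo command shown above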
go: downloading github.com/onsi/gomega v1.20.0
go: downloading k8s.io/apimachinery v0.25.0
go: downloading github.com/blang/semver v3.5.1+incompatible
... skipping 230 lines ...
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-zpmddx-mp-0-config created
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-zpmddx-mp-0-config-cgroupfs created
    cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-zpmddx created
    machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-zpmddx-mp-0 created
    dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-zpmddx-dmp-0 created

    Failed to get logs for Machine k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc, Cluster k8s-upgrade-and-conformance-9w2xo8/k8s-upgrade-and-conformance-zpmddx: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-rzzjq, Cluster k8s-upgrade-and-conformance-9w2xo8/k8s-upgrade-and-conformance-zpmddx: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-zpmddx-xd8kd-mm8fz, Cluster k8s-upgrade-and-conformance-9w2xo8/k8s-upgrade-and-conformance-zpmddx: exit status 2
    Failed to get logs for MachinePool k8s-upgrade-and-conformance-zpmddx-mp-0, Cluster k8s-upgrade-and-conformance-9w2xo8/k8s-upgrade-and-conformance-zpmddx: exit status 2
  << End Captured StdOut/StdErr Output
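The "Failed to get logs ... exit status 2" lines above mean the framework's log collector could not copy files off the CAPD machines. As a rough sketch (assuming the machines still exist as local Docker containers, which is how the Docker provider runs them), they can be inspected by hand:

# Sketch only: poke at a CAPD node container when automatic log collection fails.
docker ps --filter "name=k8s-upgrade-and-conformance-zpmddx"          # list the node containers
docker exec <node-container> crictl ps                                # workloads on that node
docker exec <node-container> journalctl -u kubelet --no-pager | tail  # kubelet logs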

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec 09/19/22 20:40:06.557
    INFO: Creating namespace k8s-upgrade-and-conformance-9w2xo8
    INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-9w2xo8"
... skipping 41 lines ...
    
    Running in parallel across 4 nodes
    
    Sep 19 20:49:02.527: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 20:49:02.530: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
    Sep 19 20:49:02.546: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
    Sep 19 20:49:02.590: INFO: The status of Pod coredns-558bd4d5db-856mz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 19 20:49:02.590: INFO: The status of Pod coredns-558bd4d5db-kffbl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 19 20:49:02.590: INFO: The status of Pod kindnet-28vbx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 19 20:49:02.590: INFO: The status of Pod kindnet-w6z65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 19 20:49:02.590: INFO: The status of Pod kube-proxy-hncm4 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 19 20:49:02.590: INFO: The status of Pod kube-proxy-mvw9q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 19 20:49:02.590: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
    Sep 19 20:49:02.590: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 19 20:49:02.590: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 19 20:49:02.590: INFO: coredns-558bd4d5db-856mz  k8s-upgrade-and-conformance-zpmddx-worker-xgsqux  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:47:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:48:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:47:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:47:14 +0000 UTC  }]
    Sep 19 20:49:02.590: INFO: coredns-558bd4d5db-kffbl  k8s-upgrade-and-conformance-zpmddx-worker-g0oas4  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:46:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:48:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:46:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:46:10 +0000 UTC  }]
    Sep 19 20:49:02.590: INFO: kindnet-28vbx             k8s-upgrade-and-conformance-zpmddx-worker-g0oas4  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:41:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:48:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:42:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:41:46 +0000 UTC  }]
    Sep 19 20:49:02.590: INFO: kindnet-w6z65             k8s-upgrade-and-conformance-zpmddx-worker-xgsqux  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:42:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:48:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:42:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:42:04 +0000 UTC  }]
    Sep 19 20:49:02.590: INFO: kube-proxy-hncm4          k8s-upgrade-and-conformance-zpmddx-worker-xgsqux  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:47:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:48:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:47:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:47:12 +0000 UTC  }]
    Sep 19 20:49:02.590: INFO: kube-proxy-mvw9q          k8s-upgrade-and-conformance-zpmddx-worker-g0oas4  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:46:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:48:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:46:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:46:38 +0000 UTC  }]
    Sep 19 20:49:02.591: INFO: 
    Sep 19 20:49:04.613: INFO: The status of Pod coredns-558bd4d5db-856mz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 19 20:49:04.613: INFO: The status of Pod coredns-558bd4d5db-kffbl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 19 20:49:04.613: INFO: The status of Pod kindnet-28vbx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 19 20:49:04.613: INFO: The status of Pod kindnet-w6z65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 19 20:49:04.613: INFO: The status of Pod kube-proxy-hncm4 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 19 20:49:04.613: INFO: The status of Pod kube-proxy-mvw9q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 19 20:49:04.613: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
    Sep 19 20:49:04.613: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 19 20:49:04.613: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 19 20:49:04.613: INFO: coredns-558bd4d5db-856mz  k8s-upgrade-and-conformance-zpmddx-worker-xgsqux  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:47:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:48:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:47:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:47:14 +0000 UTC  }]
    Sep 19 20:49:04.613: INFO: coredns-558bd4d5db-kffbl  k8s-upgrade-and-conformance-zpmddx-worker-g0oas4  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:46:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:48:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:46:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:46:10 +0000 UTC  }]
    Sep 19 20:49:04.613: INFO: kindnet-28vbx             k8s-upgrade-and-conformance-zpmddx-worker-g0oas4  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:41:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:48:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:42:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:41:46 +0000 UTC  }]
    Sep 19 20:49:04.613: INFO: kindnet-w6z65             k8s-upgrade-and-conformance-zpmddx-worker-xgsqux  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:42:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:48:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:42:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:42:04 +0000 UTC  }]
    Sep 19 20:49:04.613: INFO: kube-proxy-hncm4          k8s-upgrade-and-conformance-zpmddx-worker-xgsqux  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:47:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:48:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:47:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:47:12 +0000 UTC  }]
    Sep 19 20:49:04.613: INFO: kube-proxy-mvw9q          k8s-upgrade-and-conformance-zpmddx-worker-g0oas4  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:46:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:48:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:46:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:46:38 +0000 UTC  }]
    Sep 19 20:49:04.613: INFO: 
    Sep 19 20:49:06.616: INFO: The status of Pod coredns-558bd4d5db-m2bz6 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 19 20:49:06.616: INFO: The status of Pod coredns-558bd4d5db-xzg6g is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 19 20:49:06.616: INFO: 14 / 16 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
    Sep 19 20:49:06.616: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 19 20:49:06.616: INFO: POD                       NODE                                                            PHASE    GRACE  CONDITIONS
    Sep 19 20:49:06.616: INFO: coredns-558bd4d5db-m2bz6  k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp                Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:49:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:49:06 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:49:06 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:49:06 +0000 UTC  }]
    Sep 19 20:49:06.616: INFO: coredns-558bd4d5db-xzg6g  k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-rzzjq  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:49:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:49:06 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:49:06 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:49:06 +0000 UTC  }]
    Sep 19 20:49:06.616: INFO: 
... skipping 41 lines ...
    STEP: Destroying namespace "services-3006" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:49:08.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-1134" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
    STEP: Destroying namespace "crd-webhook-1265" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":1,"skipped":45,"failed":0}

    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:49:20.772: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:49:21.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4379" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":2,"skipped":45,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "services-5985" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] version v1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 340 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:49:22.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "proxy-1201" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":2,"skipped":34,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
    
    Sep 19 20:49:33.330: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment":
    &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88  deployment-2862  ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea 2819 3 2022-09-19 20:49:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 47f8b69e-52ea-435b-987b-69702a3d4f81 0xc004583337 0xc004583338}] []  [{kube-controller-manager Update apps/v1 2022-09-19 20:49:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47f8b69e-52ea-435b-987b-69702a3d4f81\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0045833b8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
    Sep 19 20:49:33.330: INFO: All old ReplicaSets of Deployment "webserver-deployment":
    Sep 19 20:49:33.330: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb  deployment-2862  c2685032-864b-47df-97a6-a4eec8b06989 2817 3 2022-09-19 20:49:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 47f8b69e-52ea-435b-987b-69702a3d4f81 0xc004583417 0xc004583418}] []  [{kube-controller-manager Update apps/v1 2022-09-19 20:49:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47f8b69e-52ea-435b-987b-69702a3d4f81\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [] []  []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004583488 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
    Sep 19 20:49:33.351: INFO: Pod "webserver-deployment-795d758f88-42xgz" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-42xgz webserver-deployment-795d758f88- deployment-2862  b38c4d50-3277-4d15-b67e-4171957c8a30 2806 0 2022-09-19 20:49:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea 0xc004583920 0xc004583921}] []  [{kube-controller-manager Update v1 2022-09-19 20:49:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-19 20:49:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.5\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6j5xh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6j5xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,
SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-zpmddx-worker-30lpjb,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.5,StartTime:2022-09-19 20:49:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 19 20:49:33.351: INFO: Pod "webserver-deployment-795d758f88-5g9xf" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-5g9xf webserver-deployment-795d758f88- deployment-2862  069f923f-0410-47ae-9f28-d993bf75e16b 2815 0 2022-09-19 20:49:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea 0xc004583b20 0xc004583b21}] []  [{kube-controller-manager Update v1 2022-09-19 20:49:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-19 20:49:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.6\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j9crj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j9crj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,
SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.6,StartTime:2022-09-19 20:49:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 19 20:49:33.352: INFO: Pod "webserver-deployment-795d758f88-8dbxj" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-8dbxj webserver-deployment-795d758f88- deployment-2862  eff0653b-dcbb-469f-a1ff-541a52b383e8 2809 0 2022-09-19 20:49:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea 0xc004583d20 0xc004583d21}] []  [{kube-controller-manager Update v1 2022-09-19 20:49:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-19 20:49:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.8\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-b26xh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b26xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,
SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.8,StartTime:2022-09-19 20:49:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 19 20:49:33.352: INFO: Pod "webserver-deployment-795d758f88-9tjhq" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-9tjhq webserver-deployment-795d758f88- deployment-2862  4f4c4b30-8a98-4319-9182-376793ec9ec6 2839 0 2022-09-19 20:49:33 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea 0xc004583f20 0xc004583f21}] []  [{kube-controller-manager Update v1 2022-09-19 20:49:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tsbk8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tsbk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions
:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 19 20:49:33.352: INFO: Pod "webserver-deployment-795d758f88-ccts2" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-ccts2 webserver-deployment-795d758f88- deployment-2862  b3964d48-bd66-440a-b10b-a903db735d87 2812 0 2022-09-19 20:49:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea 0xc000ab61e7 0xc000ab61e8}] []  [{kube-controller-manager Update v1 2022-09-19 20:49:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-19 20:49:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mg9f4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mg9f4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,
SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.9,StartTime:2022-09-19 20:49:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 19 20:49:33.352: INFO: Pod "webserver-deployment-795d758f88-cgvw5" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-cgvw5 webserver-deployment-795d758f88- deployment-2862  ea6799c1-0fd4-4008-84a9-e80e3ffd86e6 2836 0 2022-09-19 20:49:33 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea 0xc000ab65e0 0xc000ab65e1}] []  [{kube-controller-manager Update v1 2022-09-19 20:49:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-658x4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-658x4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions
:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 19 20:49:33.353: INFO: Pod "webserver-deployment-795d758f88-lgnnh" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-lgnnh webserver-deployment-795d758f88- deployment-2862  99f9adac-9115-4c82-92a0-0421156ba88b 2842 0 2022-09-19 20:49:33 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea 0xc000ab7007 0xc000ab7008}] []  [{kube-controller-manager Update v1 2022-09-19 20:49:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v5lm9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5lm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-zpmddx-worker-30lpjb,HostNetwork:false,HostPID:false,HostIPC:false,Se
curityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 19 20:49:33.353: INFO: Pod "webserver-deployment-795d758f88-mnhqn" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-mnhqn webserver-deployment-795d758f88- deployment-2862  0274ea65-cf3d-4ad8-b11b-045d4fc9e40d 2850 0 2022-09-19 20:49:33 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea 0xc000ab7210 0xc000ab7211}] []  [{kube-controller-manager Update v1 2022-09-19 20:49:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-19 20:49:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-flsnt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-flsnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,Allo
wPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2022-09-19 20:49:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 19 20:49:33.353: INFO: Pod "webserver-deployment-795d758f88-sg75j" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-sg75j webserver-deployment-795d758f88- deployment-2862  330db576-5f62-4cd0-afe1-35192bc4dc9b 2847 0 2022-09-19 20:49:33 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea 0xc000ab7440 0xc000ab7441}] []  [{kube-controller-manager Update v1 2022-09-19 20:49:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-crv2w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-crv2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc,HostNetwork:false,HostPID:false,Ho
stIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 19 20:49:33.353: INFO: Pod "webserver-deployment-795d758f88-wbsvb" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-wbsvb webserver-deployment-795d758f88- deployment-2862  447d32b8-7a38-4d25-8daf-7ad6594196b8 2844 0 2022-09-19 20:49:33 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea 0xc000ab75e0 0xc000ab75e1}] []  [{kube-controller-manager Update v1 2022-09-19 20:49:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pvpq7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvpq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-rzzjq,HostNetwork:false,HostPID:false,Ho
stIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 19 20:49:33.354: INFO: Pod "webserver-deployment-795d758f88-zf9lj" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-zf9lj webserver-deployment-795d758f88- deployment-2862  39b731d5-b6de-480c-a274-87c4a5e872d2 2848 0 2022-09-19 20:49:33 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea 0xc000ab77f0 0xc000ab77f1}] []  [{kube-controller-manager Update v1 2022-09-19 20:49:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-n6gx6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n6gx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp,HostNetwork:false,HostPID:false,HostIPC:false,Se
curityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 19 20:49:33.355: INFO: Pod "webserver-deployment-795d758f88-zm6qv" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-zm6qv webserver-deployment-795d758f88- deployment-2862  66a78344-1223-4004-b654-77a3f2b374f5 2799 0 2022-09-19 20:49:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea 0xc000ab7b10 0xc000ab7b11}] []  [{kube-controller-manager Update v1 2022-09-19 20:49:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff075bc9-1581-48b5-9cd3-b5f9cc2d22ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-19 20:49:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.8\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-knjbm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-knjbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,
SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-rzzjq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.8,StartTime:2022-09-19 20:49:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 19 20:49:33.356: INFO: Pod "webserver-deployment-847dcfb7fb-2zb4f" is not available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2zb4f webserver-deployment-847dcfb7fb- deployment-2862  2a962e4d-8d9b-4d7a-bf9f-60cf0c5b40ce 2834 0 2022-09-19 20:49:33 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c2685032-864b-47df-97a6-a4eec8b06989 0xc000ab7e30 0xc000ab7e31}] []  [{kube-controller-manager Update v1 2022-09-19 20:49:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2685032-864b-47df-97a6-a4eec8b06989\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-57mkx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-57mkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc,HostNe
twork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 19 20:49:33.356: INFO: Pod "webserver-deployment-847dcfb7fb-5rl9d" is available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-5rl9d webserver-deployment-847dcfb7fb- deployment-2862  14c1bdb9-f788-4dae-9a75-bdb51da3d36d 2715 0 2022-09-19 20:49:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c2685032-864b-47df-97a6-a4eec8b06989 0xc0008ba0b0 0xc0008ba0b1}] []  [{kube-controller-manager Update v1 2022-09-19 20:49:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2685032-864b-47df-97a6-a4eec8b06989\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-19 20:49:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.3\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9vz9g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9vz9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:ni
l,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-zpmddx-worker-30lpjb,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.3,StartTime:2022-09-19 20:49:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-19 20:49:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://c95f990e4e5278f9c308ed36095d0254e09217ee6c635493abc75ad74bc25e66,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 19 20:49:33.356: INFO: Pod "webserver-deployment-847dcfb7fb-6ccxx" is available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-6ccxx webserver-deployment-847dcfb7fb- deployment-2862  1f1f32e1-d9c2-428c-9cb4-5efcd1a46a2b 2660 0 2022-09-19 20:49:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb c2685032-864b-47df-97a6-a4eec8b06989 0xc0008ba320 0xc0008ba321}] []  [{kube-controller-manager Update v1 2022-09-19 20:49:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2685032-864b-47df-97a6-a4eec8b06989\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-19 20:49:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-z2rb5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z2rb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:ni
l,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-19 20:49:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.4,StartTime:2022-09-19 20:49:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-19 20:49:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://12d73ae59eaaa84db8750fb85def9f72f6e275e365ed318a0d198eec0d3507f7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:49:33.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-2862" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":3,"skipped":90,"failed":0}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:49:33.453: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 17 lines ...
    STEP: Destroying namespace "services-2334" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":4,"skipped":90,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:49:22.261: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 19 20:49:30.341: INFO: Deleting pod "var-expansion-13d8fe6d-7caa-4ee6-987b-652ab38d2d46" in namespace "var-expansion-6296"
    Sep 19 20:49:30.348: INFO: Wait up to 5m0s for pod "var-expansion-13d8fe6d-7caa-4ee6-987b-652ab38d2d46" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:49:44.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-6296" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":3,"skipped":37,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:49:50.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-6779" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":5,"skipped":94,"failed":0}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:49:51.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-6730" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":6,"skipped":112,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:50:05.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-5214" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":7,"skipped":128,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 42 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:50:10.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-6029" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":46,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:50:13.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-3601" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":130,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:50:13.557: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep 19 20:50:13.758: INFO: PodSpec: initContainers in spec.initContainers
    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:50:18.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-2244" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":9,"skipped":196,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:50:19.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "certificates-3979" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":10,"skipped":208,"failed":0}

    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:50:19.310: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 19 20:50:19.358: INFO: Waiting up to 5m0s for pod "busybox-user-65534-b396513c-48cb-40cf-8c7d-0dfe573dc477" in namespace "security-context-test-1560" to be "Succeeded or Failed"
    Sep 19 20:50:19.366: INFO: Pod "busybox-user-65534-b396513c-48cb-40cf-8c7d-0dfe573dc477": Phase="Pending", Reason="", readiness=false. Elapsed: 8.280934ms
    Sep 19 20:50:21.370: INFO: Pod "busybox-user-65534-b396513c-48cb-40cf-8c7d-0dfe573dc477": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012447061s
    Sep 19 20:50:21.371: INFO: Pod "busybox-user-65534-b396513c-48cb-40cf-8c7d-0dfe573dc477" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:50:21.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-1560" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":208,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:50:21.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-3055" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":12,"skipped":214,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:50:21.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-382" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":13,"skipped":267,"failed":0}

    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:50:21.662: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 188 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:50:30.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7019" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":14,"skipped":267,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:50:30.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-8343" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":15,"skipped":273,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 64 lines ...
    STEP: Destroying namespace "services-3020" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":76,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 48 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:50:46.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "hostport-3639" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":296,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:50:48.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-9978" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":17,"skipped":329,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:50:48.184: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-ab3e3ee5-c589-4c66-b0b6-629bc1e12f70
    STEP: Creating a pod to test consume configMaps
    Sep 19 20:50:48.236: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-baf94afa-fa48-4270-8c10-64a4941554d8" in namespace "projected-758" to be "Succeeded or Failed"
    Sep 19 20:50:48.239: INFO: Pod "pod-projected-configmaps-baf94afa-fa48-4270-8c10-64a4941554d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.656477ms
    Sep 19 20:50:50.251: INFO: Pod "pod-projected-configmaps-baf94afa-fa48-4270-8c10-64a4941554d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014798155s
    STEP: Saw pod success
    Sep 19 20:50:50.251: INFO: Pod "pod-projected-configmaps-baf94afa-fa48-4270-8c10-64a4941554d8" satisfied condition "Succeeded or Failed"
    Sep 19 20:50:50.254: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod pod-projected-configmaps-baf94afa-fa48-4270-8c10-64a4941554d8 container projected-configmap-volume-test: <nil>
    STEP: delete the pod
    Sep 19 20:50:50.277: INFO: Waiting for pod pod-projected-configmaps-baf94afa-fa48-4270-8c10-64a4941554d8 to disappear
    Sep 19 20:50:50.281: INFO: Pod pod-projected-configmaps-baf94afa-fa48-4270-8c10-64a4941554d8 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:50:50.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-758" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":333,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:51:00.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-8089" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":19,"skipped":344,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir wrapper volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:51:02.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-wrapper-5455" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":20,"skipped":351,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:51:04.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-2469" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":21,"skipped":361,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:51:14.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-5259" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":22,"skipped":386,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:51:18.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-1384" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":23,"skipped":390,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:51:18.972: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-runtime
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: create the container
    STEP: wait for the container to reach Failed
    STEP: get the container status
    STEP: the container should be terminated
    STEP: the termination message should be set
    Sep 19 20:51:21.025: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
    STEP: delete the container
    [AfterEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:51:21.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-9255" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":393,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
    [It] should serve multiport endpoints from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating service multi-endpoint-test in namespace services-9904
    STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9904 to expose endpoints map[]
    Sep 19 20:51:21.099: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found
    Sep 19 20:51:22.112: INFO: successfully validated that service multi-endpoint-test in namespace services-9904 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-9904
    Sep 19 20:51:22.123: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep 19 20:51:24.128: INFO: The status of Pod pod1 is Running (Ready = true)
    STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9904 to expose endpoints map[pod1:[100]]
    Sep 19 20:51:24.144: INFO: successfully validated that service multi-endpoint-test in namespace services-9904 exposes endpoints map[pod1:[100]]
... skipping 14 lines ...
    STEP: Destroying namespace "services-9904" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":25,"skipped":395,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "webhook-9460-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":26,"skipped":399,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:51:30.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9334" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":27,"skipped":418,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":6,"skipped":104,"failed":0}

    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:50:34.082: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename watch
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:51:34.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-2792" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":7,"skipped":104,"failed":0}

    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:51:34.208: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:51:34.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9328" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":8,"skipped":104,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:51:34.286: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create ConfigMap with empty key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap that has name configmap-test-emptyKey-e6e7ceb0-766e-440b-99a0-fd1f52f8d4dd
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:51:34.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-8974" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":9,"skipped":110,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:51:39.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-2212" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":10,"skipped":116,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:51:39.366: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-fbc1cb50-669a-4e68-b9b5-be4a989a57b9
    STEP: Creating a pod to test consume configMaps
    Sep 19 20:51:39.412: INFO: Waiting up to 5m0s for pod "pod-configmaps-43e59d20-cc82-40f8-814a-a52a53411a2c" in namespace "configmap-5346" to be "Succeeded or Failed"
    Sep 19 20:51:39.415: INFO: Pod "pod-configmaps-43e59d20-cc82-40f8-814a-a52a53411a2c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.095045ms
    Sep 19 20:51:41.420: INFO: Pod "pod-configmaps-43e59d20-cc82-40f8-814a-a52a53411a2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007397636s
    STEP: Saw pod success
    Sep 19 20:51:41.420: INFO: Pod "pod-configmaps-43e59d20-cc82-40f8-814a-a52a53411a2c" satisfied condition "Succeeded or Failed"
    Sep 19 20:51:41.424: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod pod-configmaps-43e59d20-cc82-40f8-814a-a52a53411a2c container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 20:51:41.439: INFO: Waiting for pod pod-configmaps-43e59d20-cc82-40f8-814a-a52a53411a2c to disappear
    Sep 19 20:51:41.443: INFO: Pod pod-configmaps-43e59d20-cc82-40f8-814a-a52a53411a2c no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:51:41.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5346" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":125,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:51:41.464: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test env composition
    Sep 19 20:51:41.505: INFO: Waiting up to 5m0s for pod "var-expansion-079d4007-39be-4b12-9faa-3a0336ff1e28" in namespace "var-expansion-2391" to be "Succeeded or Failed"
    Sep 19 20:51:41.508: INFO: Pod "var-expansion-079d4007-39be-4b12-9faa-3a0336ff1e28": Phase="Pending", Reason="", readiness=false. Elapsed: 3.093267ms
    Sep 19 20:51:43.512: INFO: Pod "var-expansion-079d4007-39be-4b12-9faa-3a0336ff1e28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007465897s
    STEP: Saw pod success
    Sep 19 20:51:43.512: INFO: Pod "var-expansion-079d4007-39be-4b12-9faa-3a0336ff1e28" satisfied condition "Succeeded or Failed"
    Sep 19 20:51:43.515: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod var-expansion-079d4007-39be-4b12-9faa-3a0336ff1e28 container dapi-container: <nil>
    STEP: delete the pod
    Sep 19 20:51:43.531: INFO: Waiting for pod var-expansion-079d4007-39be-4b12-9faa-3a0336ff1e28 to disappear
    Sep 19 20:51:43.534: INFO: Pod var-expansion-079d4007-39be-4b12-9faa-3a0336ff1e28 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:51:43.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-2391" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":127,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep 19 20:51:49.672: INFO: File wheezy_udp@dns-test-service-3.dns-644.svc.cluster.local from pod  dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 19 20:51:49.678: INFO: File jessie_udp@dns-test-service-3.dns-644.svc.cluster.local from pod  dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 19 20:51:49.678: INFO: Lookups using dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 failed for: [wheezy_udp@dns-test-service-3.dns-644.svc.cluster.local jessie_udp@dns-test-service-3.dns-644.svc.cluster.local]
    
    Sep 19 20:51:54.684: INFO: File wheezy_udp@dns-test-service-3.dns-644.svc.cluster.local from pod  dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 19 20:51:54.688: INFO: File jessie_udp@dns-test-service-3.dns-644.svc.cluster.local from pod  dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 19 20:51:54.689: INFO: Lookups using dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 failed for: [wheezy_udp@dns-test-service-3.dns-644.svc.cluster.local jessie_udp@dns-test-service-3.dns-644.svc.cluster.local]
    
    Sep 19 20:51:59.684: INFO: File wheezy_udp@dns-test-service-3.dns-644.svc.cluster.local from pod  dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 contains '' instead of 'bar.example.com.'
    Sep 19 20:51:59.695: INFO: File jessie_udp@dns-test-service-3.dns-644.svc.cluster.local from pod  dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 19 20:51:59.695: INFO: Lookups using dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 failed for: [wheezy_udp@dns-test-service-3.dns-644.svc.cluster.local jessie_udp@dns-test-service-3.dns-644.svc.cluster.local]
    
    Sep 19 20:52:04.684: INFO: File wheezy_udp@dns-test-service-3.dns-644.svc.cluster.local from pod  dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 19 20:52:04.689: INFO: File jessie_udp@dns-test-service-3.dns-644.svc.cluster.local from pod  dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 19 20:52:04.689: INFO: Lookups using dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 failed for: [wheezy_udp@dns-test-service-3.dns-644.svc.cluster.local jessie_udp@dns-test-service-3.dns-644.svc.cluster.local]
    
    Sep 19 20:52:09.684: INFO: File wheezy_udp@dns-test-service-3.dns-644.svc.cluster.local from pod  dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 19 20:52:09.688: INFO: File jessie_udp@dns-test-service-3.dns-644.svc.cluster.local from pod  dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 19 20:52:09.688: INFO: Lookups using dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 failed for: [wheezy_udp@dns-test-service-3.dns-644.svc.cluster.local jessie_udp@dns-test-service-3.dns-644.svc.cluster.local]
    
    Sep 19 20:52:14.684: INFO: File wheezy_udp@dns-test-service-3.dns-644.svc.cluster.local from pod  dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 19 20:52:14.688: INFO: File jessie_udp@dns-test-service-3.dns-644.svc.cluster.local from pod  dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 19 20:52:14.688: INFO: Lookups using dns-644/dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 failed for: [wheezy_udp@dns-test-service-3.dns-644.svc.cluster.local jessie_udp@dns-test-service-3.dns-644.svc.cluster.local]
    
    Sep 19 20:52:19.690: INFO: DNS probes using dns-test-371a94f2-3c59-484b-8ea9-875b9ad61191 succeeded
    
    STEP: deleting the pod
    STEP: changing the service to type=ClusterIP
    STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-644.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-644.svc.cluster.local; sleep 1; done
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:52:21.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-644" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":13,"skipped":129,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 43 lines ...
    STEP: Destroying namespace "services-5463" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":14,"skipped":131,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:52:24.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-9890" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":153,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Discovery
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 89 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:52:24.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "discovery-2345" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":16,"skipped":176,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:52:30.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-8482" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":17,"skipped":186,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:52:32.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-685" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":207,"failed":0}

    [BeforeEach] [sig-network] Ingress API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:52:32.845: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename ingress
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:52:32.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingress-4560" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":19,"skipped":207,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:52:33.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-4097" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":20,"skipped":216,"failed":0}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 20:52:33.119: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cea58dc-7188-4be5-8c2e-9a92629f39a7" in namespace "downward-api-9056" to be "Succeeded or Failed"
    Sep 19 20:52:33.123: INFO: Pod "downwardapi-volume-7cea58dc-7188-4be5-8c2e-9a92629f39a7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.571972ms
    Sep 19 20:52:35.127: INFO: Pod "downwardapi-volume-7cea58dc-7188-4be5-8c2e-9a92629f39a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007361946s
    STEP: Saw pod success
    Sep 19 20:52:35.127: INFO: Pod "downwardapi-volume-7cea58dc-7188-4be5-8c2e-9a92629f39a7" satisfied condition "Succeeded or Failed"
    Sep 19 20:52:35.129: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod downwardapi-volume-7cea58dc-7188-4be5-8c2e-9a92629f39a7 container client-container: <nil>
    STEP: delete the pod
    Sep 19 20:52:35.156: INFO: Waiting for pod downwardapi-volume-7cea58dc-7188-4be5-8c2e-9a92629f39a7 to disappear
    Sep 19 20:52:35.158: INFO: Pod downwardapi-volume-7cea58dc-7188-4be5-8c2e-9a92629f39a7 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:52:35.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-9056" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":233,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] IngressClass API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:52:35.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingressclass-5164" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":22,"skipped":256,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:52:36.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-6620" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":23,"skipped":269,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:52:36.604: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating secret secrets-2140/secret-test-e1d673d7-9f25-4ff6-b052-f3b7cead50c8
    STEP: Creating a pod to test consume secrets
    Sep 19 20:52:36.649: INFO: Waiting up to 5m0s for pod "pod-configmaps-04adb58c-cff9-4993-b36f-dfafafe2e499" in namespace "secrets-2140" to be "Succeeded or Failed"
    Sep 19 20:52:36.652: INFO: Pod "pod-configmaps-04adb58c-cff9-4993-b36f-dfafafe2e499": Phase="Pending", Reason="", readiness=false. Elapsed: 3.033923ms
    Sep 19 20:52:38.656: INFO: Pod "pod-configmaps-04adb58c-cff9-4993-b36f-dfafafe2e499": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007433198s
    STEP: Saw pod success
    Sep 19 20:52:38.656: INFO: Pod "pod-configmaps-04adb58c-cff9-4993-b36f-dfafafe2e499" satisfied condition "Succeeded or Failed"
    Sep 19 20:52:38.660: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-configmaps-04adb58c-cff9-4993-b36f-dfafafe2e499 container env-test: <nil>
    STEP: delete the pod
    Sep 19 20:52:38.676: INFO: Waiting for pod pod-configmaps-04adb58c-cff9-4993-b36f-dfafafe2e499 to disappear
    Sep 19 20:52:38.680: INFO: Pod pod-configmaps-04adb58c-cff9-4993-b36f-dfafafe2e499 no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:52:38.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-2140" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":280,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-1505-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":25,"skipped":287,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:52:45.575: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create secret due to empty secret key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name secret-emptykey-test-2fc06773-4e81-486b-8a6a-b00dd69d8366
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:52:45.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-2725" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":26,"skipped":292,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "webhook-4913-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":27,"skipped":294,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:52:53.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-5289" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":28,"skipped":323,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:53:01.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-9905" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":28,"skipped":429,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:53:07.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-5431" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":29,"skipped":452,"failed":0}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:53:07.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3622" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":30,"skipped":467,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
    [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
    STEP: Watching for error events or started pod
    STEP: Waiting for pod completion
    STEP: Checking that the pod succeeded
    STEP: Getting logs from the pod
    STEP: Checking that the sysctl is actually updated
    [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:53:09.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-2461" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":31,"skipped":468,"failed":0}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    Sep 19 20:52:55.569: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:52:55.577: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:52:55.594: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:52:55.598: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:52:55.603: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:52:55.608: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:52:55.616: INFO: Lookups using dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4743.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4743.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local jessie_udp@dns-test-service-2.dns-4743.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4743.svc.cluster.local]

    
    Sep 19 20:53:00.621: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:00.625: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:00.629: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:00.633: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:00.647: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:00.651: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:00.655: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:00.659: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:00.668: INFO: Lookups using dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4743.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4743.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local jessie_udp@dns-test-service-2.dns-4743.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4743.svc.cluster.local]

    
    Sep 19 20:53:05.624: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:05.632: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:05.642: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:05.647: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:05.662: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:05.666: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:05.676: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:05.680: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:05.688: INFO: Lookups using dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4743.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4743.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local jessie_udp@dns-test-service-2.dns-4743.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4743.svc.cluster.local]

    
    Sep 19 20:53:10.621: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:10.626: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:10.630: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:10.635: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:10.658: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:10.668: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:10.673: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:10.676: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:10.686: INFO: Lookups using dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4743.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4743.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local jessie_udp@dns-test-service-2.dns-4743.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4743.svc.cluster.local]

    
    Sep 19 20:53:15.621: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:15.624: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:15.628: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:15.632: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:15.642: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:15.645: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:15.648: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:15.650: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:15.657: INFO: Lookups using dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4743.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4743.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local jessie_udp@dns-test-service-2.dns-4743.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4743.svc.cluster.local]

    
    Sep 19 20:53:20.621: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:20.624: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:20.628: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:20.632: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:20.643: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:20.648: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:20.652: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:20.657: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4743.svc.cluster.local from pod dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8: the server could not find the requested resource (get pods dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8)
    Sep 19 20:53:20.666: INFO: Lookups using dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4743.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4743.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4743.svc.cluster.local jessie_udp@dns-test-service-2.dns-4743.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4743.svc.cluster.local]

    
    Sep 19 20:53:25.659: INFO: DNS probes using dns-4743/dns-test-c6cba1eb-a4f7-44a5-9bd8-107e5ae8fab8 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:53:25.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-4743" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":29,"skipped":324,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 51 lines ...
    STEP: Destroying namespace "services-5949" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":330,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:53:53.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-4266" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":364,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-23-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":32,"skipped":374,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:300.067 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule jobs when suspended [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep 19 20:52:56.781: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-8110/dns-test-d1584b5f-5fbf-456c-8e4b-76d3db3604f5: the server is currently unable to handle the request (get pods dns-test-d1584b5f-5fbf-456c-8e4b-76d3db3604f5)
    Sep 19 20:54:22.766: FAIL: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-8110/dns-test-d1584b5f-5fbf-456c-8e4b-76d3db3604f5: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-8110/pods/dns-test-d1584b5f-5fbf-456c-8e4b-76d3db3604f5/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded

    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001601da8, 0x29a3500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc001f7e4e0, 0xc001601da8, 0xc001f7e4e0, 0xc001601da8)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
    testing.tRunner(0xc001e60780, 0x70fea78)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0919 20:54:22.767330      20 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep 19 20:54:22.766: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-8110/dns-test-d1584b5f-5fbf-456c-8e4b-76d3db3604f5: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-8110/pods/dns-test-d1584b5f-5fbf-456c-8e4b-76d3db3604f5/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001601da8, 0x29a3500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc001f7e4e0, 0xc001601da8, 0xc001f7e4e0, 0xc001601da8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc001601da8, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc001dfa100, 0x8, 0x8, 0x6ee63d3, 0x7, 0xc0032fd000, 0x77b8c18, 0xc0031e42c0, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc001082b00, 0xc0032fd000, 0xc001dfa100, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.1()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x58a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc001e60780)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc001e60780)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc001e60780, 0x70fea78)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a84100, 0xc002a58140)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6a84100, 0xc002a58140)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00125fe40, 0x159, 0x86a5e60, 0x7d, 0xd3, 0xc003c0c000, 0x7fb)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x61dbcc0, 0x75da840)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc00125fe40, 0x159, 0xc0016017e8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00125fe40, 0x159, 0xc0016018d0, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x6f89b47, 0x24, 0xc001601b30, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc001f7e400, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001601da8, 0x29a3500, 0x0, 0x0)
... skipping 73 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:54:23.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-8862" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":64,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:54:23.745: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-7e0b1d83-be3a-4126-8b54-1abb767d8dc9
    STEP: Creating a pod to test consume secrets
    Sep 19 20:54:23.790: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-17ed6203-2100-41d6-8186-02027750e987" in namespace "projected-3475" to be "Succeeded or Failed"
    Sep 19 20:54:23.794: INFO: Pod "pod-projected-secrets-17ed6203-2100-41d6-8186-02027750e987": Phase="Pending", Reason="", readiness=false. Elapsed: 3.415034ms
    Sep 19 20:54:25.799: INFO: Pod "pod-projected-secrets-17ed6203-2100-41d6-8186-02027750e987": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008095026s
    STEP: Saw pod success
    Sep 19 20:54:25.799: INFO: Pod "pod-projected-secrets-17ed6203-2100-41d6-8186-02027750e987" satisfied condition "Succeeded or Failed"

    Sep 19 20:54:25.801: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod pod-projected-secrets-17ed6203-2100-41d6-8186-02027750e987 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 19 20:54:25.824: INFO: Waiting for pod pod-projected-secrets-17ed6203-2100-41d6-8186-02027750e987 to disappear
    Sep 19 20:54:25.829: INFO: Pod pod-projected-secrets-17ed6203-2100-41d6-8186-02027750e987 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:54:25.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3475" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":78,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:54:25.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-5300" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":6,"skipped":142,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 20:54:26.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a6caefb-10e9-43d2-aea7-4666b49662a5" in namespace "projected-6703" to be "Succeeded or Failed"
    Sep 19 20:54:26.053: INFO: Pod "downwardapi-volume-8a6caefb-10e9-43d2-aea7-4666b49662a5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.255221ms
    Sep 19 20:54:28.064: INFO: Pod "downwardapi-volume-8a6caefb-10e9-43d2-aea7-4666b49662a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013894353s
    STEP: Saw pod success
    Sep 19 20:54:28.064: INFO: Pod "downwardapi-volume-8a6caefb-10e9-43d2-aea7-4666b49662a5" satisfied condition "Succeeded or Failed"
    Sep 19 20:54:28.069: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod downwardapi-volume-8a6caefb-10e9-43d2-aea7-4666b49662a5 container client-container: <nil>
    STEP: delete the pod
    Sep 19 20:54:28.087: INFO: Waiting for pod downwardapi-volume-8a6caefb-10e9-43d2-aea7-4666b49662a5 to disappear
    Sep 19 20:54:28.092: INFO: Pod downwardapi-volume-8a6caefb-10e9-43d2-aea7-4666b49662a5 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:54:28.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6703" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":146,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:54:30.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-2967" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":154,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:54:30.208: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep 19 20:54:30.254: INFO: Waiting up to 5m0s for pod "pod-f9f773b9-e57b-4aa3-9285-7f833ad6f047" in namespace "emptydir-8879" to be "Succeeded or Failed"
    Sep 19 20:54:30.257: INFO: Pod "pod-f9f773b9-e57b-4aa3-9285-7f833ad6f047": Phase="Pending", Reason="", readiness=false. Elapsed: 3.121502ms
    Sep 19 20:54:32.261: INFO: Pod "pod-f9f773b9-e57b-4aa3-9285-7f833ad6f047": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006988054s
    STEP: Saw pod success
    Sep 19 20:54:32.261: INFO: Pod "pod-f9f773b9-e57b-4aa3-9285-7f833ad6f047" satisfied condition "Succeeded or Failed"
    Sep 19 20:54:32.264: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-f9f773b9-e57b-4aa3-9285-7f833ad6f047 container test-container: <nil>
    STEP: delete the pod
    Sep 19 20:54:32.278: INFO: Waiting for pod pod-f9f773b9-e57b-4aa3-9285-7f833ad6f047 to disappear
    Sep 19 20:54:32.281: INFO: Pod pod-f9f773b9-e57b-4aa3-9285-7f833ad6f047 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:54:32.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8879" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":158,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 20:54:32.401: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb8af553-fd26-46c0-b1d7-e105c42340df" in namespace "downward-api-9335" to be "Succeeded or Failed"
    Sep 19 20:54:32.405: INFO: Pod "downwardapi-volume-bb8af553-fd26-46c0-b1d7-e105c42340df": Phase="Pending", Reason="", readiness=false. Elapsed: 3.27275ms
    Sep 19 20:54:34.409: INFO: Pod "downwardapi-volume-bb8af553-fd26-46c0-b1d7-e105c42340df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007234121s
    STEP: Saw pod success
    Sep 19 20:54:34.409: INFO: Pod "downwardapi-volume-bb8af553-fd26-46c0-b1d7-e105c42340df" satisfied condition "Succeeded or Failed"
    Sep 19 20:54:34.412: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod downwardapi-volume-bb8af553-fd26-46c0-b1d7-e105c42340df container client-container: <nil>
    STEP: delete the pod
    Sep 19 20:54:34.427: INFO: Waiting for pod downwardapi-volume-bb8af553-fd26-46c0-b1d7-e105c42340df to disappear
    Sep 19 20:54:34.430: INFO: Pod downwardapi-volume-bb8af553-fd26-46c0-b1d7-e105c42340df no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:54:34.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-9335" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":205,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:54:42.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-1864" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":483,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
    [It] should serve a basic endpoint from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating service endpoint-test2 in namespace services-3790
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3790 to expose endpoints map[]
    Sep 19 20:54:42.391: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found
    Sep 19 20:54:43.399: INFO: successfully validated that service endpoint-test2 in namespace services-3790 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-3790
    Sep 19 20:54:43.410: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep 19 20:54:45.413: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep 19 20:54:47.415: INFO: The status of Pod pod1 is Running (Ready = true)
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3790 to expose endpoints map[pod1:[80]]
... skipping 15 lines ...
    STEP: Destroying namespace "services-3790" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":33,"skipped":487,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
    STEP: Destroying namespace "webhook-1292-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":34,"skipped":515,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:54:57.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-8280" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":35,"skipped":582,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:54:58.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-9263" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":36,"skipped":586,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:54:58.309: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name projected-secret-test-25e817d8-f701-4b70-b55f-d01739bf9ebe
    STEP: Creating a pod to test consume secrets
    Sep 19 20:54:58.365: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a6e50fdf-20e2-43ce-892d-a264bcc41645" in namespace "projected-5214" to be "Succeeded or Failed"
    Sep 19 20:54:58.373: INFO: Pod "pod-projected-secrets-a6e50fdf-20e2-43ce-892d-a264bcc41645": Phase="Pending", Reason="", readiness=false. Elapsed: 7.770755ms
    Sep 19 20:55:00.385: INFO: Pod "pod-projected-secrets-a6e50fdf-20e2-43ce-892d-a264bcc41645": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019837765s
    Sep 19 20:55:02.396: INFO: Pod "pod-projected-secrets-a6e50fdf-20e2-43ce-892d-a264bcc41645": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030875509s
    STEP: Saw pod success
    Sep 19 20:55:02.396: INFO: Pod "pod-projected-secrets-a6e50fdf-20e2-43ce-892d-a264bcc41645" satisfied condition "Succeeded or Failed"
    Sep 19 20:55:02.401: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-rzzjq pod pod-projected-secrets-a6e50fdf-20e2-43ce-892d-a264bcc41645 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 19 20:55:02.421: INFO: Waiting for pod pod-projected-secrets-a6e50fdf-20e2-43ce-892d-a264bcc41645 to disappear
    Sep 19 20:55:02.425: INFO: Pod pod-projected-secrets-a6e50fdf-20e2-43ce-892d-a264bcc41645 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:55:02.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5214" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":596,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
    [It] should contain environment variables for services [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 19 20:55:02.514: INFO: The status of Pod server-envvars-2d3fea40-1642-469f-a582-5a5000a60277 is Pending, waiting for it to be Running (with Ready = true)
    Sep 19 20:55:04.519: INFO: The status of Pod server-envvars-2d3fea40-1642-469f-a582-5a5000a60277 is Running (Ready = true)
    Sep 19 20:55:04.553: INFO: Waiting up to 5m0s for pod "client-envvars-a7c7409f-0a2e-434d-831f-ce94d67ad9ee" in namespace "pods-4615" to be "Succeeded or Failed"
    Sep 19 20:55:04.567: INFO: Pod "client-envvars-a7c7409f-0a2e-434d-831f-ce94d67ad9ee": Phase="Pending", Reason="", readiness=false. Elapsed: 13.775638ms
    Sep 19 20:55:06.572: INFO: Pod "client-envvars-a7c7409f-0a2e-434d-831f-ce94d67ad9ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018844167s
    STEP: Saw pod success
    Sep 19 20:55:06.572: INFO: Pod "client-envvars-a7c7409f-0a2e-434d-831f-ce94d67ad9ee" satisfied condition "Succeeded or Failed"
    Sep 19 20:55:06.576: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod client-envvars-a7c7409f-0a2e-434d-831f-ce94d67ad9ee container env3cont: <nil>
    STEP: delete the pod
    Sep 19 20:55:06.599: INFO: Waiting for pod client-envvars-a7c7409f-0a2e-434d-831f-ce94d67ad9ee to disappear
    Sep 19 20:55:06.604: INFO: Pod client-envvars-a7c7409f-0a2e-434d-831f-ce94d67ad9ee no longer exists
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:55:06.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-4615" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":610,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
    [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
    STEP: Watching for error events or started pod
    STEP: Waiting for pod completion
    STEP: Checking that the pod succeeded
    STEP: Getting logs from the pod
    STEP: Checking that the sysctl is actually updated
    [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:55:10.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-480" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":39,"skipped":621,"failed":0}
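    The sysctl test above asks for kernel.shm_rmid_forced through the pod-level security context and then reads the effective value back from the container's logs. A rough sketch of a pod object with that shape, using the public Kubernetes API types; the name, image and command are placeholders rather than the suite's actual fixture:

    // sysctl_pod.go: hedged sketch of a pod that requests a namespaced sysctl via
    // spec.securityContext.sysctls and prints the resulting value for inspection.
    package example

    import (
    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func SysctlPod() *v1.Pod {
    	return &v1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
    		Spec: v1.PodSpec{
    			RestartPolicy: v1.RestartPolicyNever,
    			SecurityContext: &v1.PodSecurityContext{
    				Sysctls: []v1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
    			},
    			Containers: []v1.Container{{
    				Name:    "sysctl-check",
    				Image:   "busybox",
    				Command: []string{"/bin/sh", "-c", "sysctl kernel.shm_rmid_forced"},
    			}},
    		},
    	}
    }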
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:55:17.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8053" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":652,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:55:17.397: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-86279962-5e21-450a-910c-23b3ce14d5e3
    STEP: Creating a pod to test consume secrets
    Sep 19 20:55:17.443: INFO: Waiting up to 5m0s for pod "pod-secrets-a65b8cb2-5770-4c00-a23f-4604fa5f8647" in namespace "secrets-5198" to be "Succeeded or Failed"
    Sep 19 20:55:17.446: INFO: Pod "pod-secrets-a65b8cb2-5770-4c00-a23f-4604fa5f8647": Phase="Pending", Reason="", readiness=false. Elapsed: 2.740131ms
    Sep 19 20:55:19.451: INFO: Pod "pod-secrets-a65b8cb2-5770-4c00-a23f-4604fa5f8647": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007834195s
    STEP: Saw pod success
    Sep 19 20:55:19.451: INFO: Pod "pod-secrets-a65b8cb2-5770-4c00-a23f-4604fa5f8647" satisfied condition "Succeeded or Failed"
    Sep 19 20:55:19.454: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-secrets-a65b8cb2-5770-4c00-a23f-4604fa5f8647 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 19 20:55:19.470: INFO: Waiting for pod pod-secrets-a65b8cb2-5770-4c00-a23f-4604fa5f8647 to disappear
    Sep 19 20:55:19.473: INFO: Pod pod-secrets-a65b8cb2-5770-4c00-a23f-4604fa5f8647 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:55:19.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-5198" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":654,"failed":0}
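    The secret-volume test above mounts a Secret with a non-default file mode and reads it as a non-root user with an fsGroup applied. A hedged sketch of an equivalent pod object follows; the names, UID/GID, mode and image are illustrative placeholders, not the suite's values:

    // secret_volume_pod.go: sketch of a pod consuming a secret volume with
    // defaultMode plus runAsNonRoot/runAsUser/fsGroup, as exercised above.
    package example

    import (
    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func SecretVolumePod(secretName string) *v1.Pod {
    	mode := int32(0440)
    	uid := int64(1000)
    	fsGroup := int64(2000)
    	nonRoot := true
    	return &v1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "secret-volume-demo"},
    		Spec: v1.PodSpec{
    			RestartPolicy: v1.RestartPolicyNever,
    			SecurityContext: &v1.PodSecurityContext{
    				RunAsNonRoot: &nonRoot,
    				RunAsUser:    &uid,
    				FSGroup:      &fsGroup,
    			},
    			Volumes: []v1.Volume{{
    				Name: "secret-volume",
    				VolumeSource: v1.VolumeSource{
    					Secret: &v1.SecretVolumeSource{SecretName: secretName, DefaultMode: &mode},
    				},
    			}},
    			Containers: []v1.Container{{
    				Name:         "secret-volume-test",
    				Image:        "busybox",
    				Command:      []string{"/bin/sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/*"},
    				VolumeMounts: []v1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
    			}},
    		},
    	}
    }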
    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 141 lines ...
    Sep 19 20:56:06.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4040 exec execpod-affinitys28cj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
    Sep 19 20:56:08.496: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n"
    Sep 19 20:56:08.496: INFO: stdout: ""
    Sep 19 20:56:08.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4040 exec execpod-affinitys28cj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
    Sep 19 20:56:10.683: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n"
    Sep 19 20:56:10.683: INFO: stdout: ""
    Sep 19 20:56:10.683: FAIL: Unexpected error:
        <*errors.errorString | 0xc00273e080>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol
    occurred
    
... skipping 25 lines ...
    • Failure [147.936 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 19 20:56:10.683: Unexpected error:
          <*errors.errorString | 0xc00273e080>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol
      occurred
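      For context, the probe that timed out execs into the helper pod and repeatedly pipes a request through nc to the affinity-clusterip-timeout Service, expecting a backend hostname on stdout (and, for the affinity check proper, the same hostname every time). The empty stdout lines above are the symptom: the TCP connect succeeded but no response came back. A stand-alone sketch of that probe loop, assuming kubectl on PATH, the kubeconfig at /tmp/kubeconfig and the pod/service names from the log; this is an illustration, not the framework's implementation:

      // affinity_probe.go: re-run the same in-pod request until a hostname comes
      // back or the 2m0s budget from the failure message is exhausted.
      package main

      import (
      	"fmt"
      	"os/exec"
      	"strings"
      	"time"

      	"k8s.io/apimachinery/pkg/util/wait"
      )

      func main() {
      	probe := func() (bool, error) {
      		out, err := exec.Command("kubectl",
      			"--kubeconfig=/tmp/kubeconfig", "--namespace=services-4040",
      			"exec", "execpod-affinitys28cj", "--",
      			"/bin/sh", "-c", "echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80",
      		).Output() // stdout only; nc's "succeeded!" chatter goes to stderr
      		if err != nil {
      			return false, nil // treat exec failures as retryable
      		}
      		hostname := strings.TrimSpace(string(out))
      		fmt.Printf("backend replied: %q\n", hostname)
      		return hostname != "", nil
      	}
      	if err := wait.PollImmediate(5*time.Second, 2*time.Minute, probe); err != nil {
      		fmt.Println("service not reachable within 2m0s:", err)
      	}
      }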
    
... skipping 55 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
        should perform canary updates and phased rolling updates of template modifications [Conformance]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":11,"skipped":266,"failed":0}
    
    SSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":32,"skipped":375,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:56:25.636: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 56 lines ...
    STEP: Destroying namespace "services-8432" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":33,"skipped":375,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:57:11.638: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 19 20:57:11.740: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-795b9e61-c167-4d67-b036-a640cb8a70f9" in namespace "security-context-test-6048" to be "Succeeded or Failed"
    Sep 19 20:57:11.755: INFO: Pod "busybox-privileged-false-795b9e61-c167-4d67-b036-a640cb8a70f9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.803103ms
    Sep 19 20:57:13.762: INFO: Pod "busybox-privileged-false-795b9e61-c167-4d67-b036-a640cb8a70f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021257869s
    Sep 19 20:57:13.762: INFO: Pod "busybox-privileged-false-795b9e61-c167-4d67-b036-a640cb8a70f9" satisfied condition "Succeeded or Failed"
    Sep 19 20:57:13.781: INFO: Got logs for pod "busybox-privileged-false-795b9e61-c167-4d67-b036-a640cb8a70f9": "ip: RTNETLINK answers: Operation not permitted\n"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:57:13.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-6048" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":410,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:57:36.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-7965" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":431,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:57:49.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-5106" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":36,"skipped":476,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-5948" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":37,"skipped":524,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-6882-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":38,"skipped":549,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:58:22.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7082" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":577,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:58:25.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-7583" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":585,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    Sep 19 20:58:27.466: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:27.472: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:27.528: INFO: Unable to read jessie_udp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:27.533: INFO: Unable to read jessie_tcp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:27.540: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:27.549: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:27.609: INFO: Lookups using dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea failed for: [wheezy_udp@dns-test-service.dns-3714.svc.cluster.local wheezy_tcp@dns-test-service.dns-3714.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local jessie_udp@dns-test-service.dns-3714.svc.cluster.local jessie_tcp@dns-test-service.dns-3714.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local]
    
    Sep 19 20:58:32.617: INFO: Unable to read wheezy_udp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:32.622: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:32.627: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:32.631: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:32.670: INFO: Unable to read jessie_udp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:32.675: INFO: Unable to read jessie_tcp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:32.681: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:32.687: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:32.727: INFO: Lookups using dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea failed for: [wheezy_udp@dns-test-service.dns-3714.svc.cluster.local wheezy_tcp@dns-test-service.dns-3714.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local jessie_udp@dns-test-service.dns-3714.svc.cluster.local jessie_tcp@dns-test-service.dns-3714.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local]
    
    Sep 19 20:58:37.616: INFO: Unable to read wheezy_udp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:37.622: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:37.628: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:37.634: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:37.686: INFO: Unable to read jessie_udp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:37.692: INFO: Unable to read jessie_tcp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:37.698: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:37.703: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:37.743: INFO: Lookups using dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea failed for: [wheezy_udp@dns-test-service.dns-3714.svc.cluster.local wheezy_tcp@dns-test-service.dns-3714.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local jessie_udp@dns-test-service.dns-3714.svc.cluster.local jessie_tcp@dns-test-service.dns-3714.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local]
    
    Sep 19 20:58:42.617: INFO: Unable to read wheezy_udp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:42.637: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:42.644: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:42.652: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:42.718: INFO: Unable to read jessie_udp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:42.723: INFO: Unable to read jessie_tcp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:42.728: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:42.733: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:42.770: INFO: Lookups using dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea failed for: [wheezy_udp@dns-test-service.dns-3714.svc.cluster.local wheezy_tcp@dns-test-service.dns-3714.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local jessie_udp@dns-test-service.dns-3714.svc.cluster.local jessie_tcp@dns-test-service.dns-3714.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local]
    
    Sep 19 20:58:47.617: INFO: Unable to read wheezy_udp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:47.624: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:47.634: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:47.640: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:47.690: INFO: Unable to read jessie_udp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:47.698: INFO: Unable to read jessie_tcp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:47.708: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:47.714: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:47.759: INFO: Lookups using dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea failed for: [wheezy_udp@dns-test-service.dns-3714.svc.cluster.local wheezy_tcp@dns-test-service.dns-3714.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local jessie_udp@dns-test-service.dns-3714.svc.cluster.local jessie_tcp@dns-test-service.dns-3714.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local]
    
    Sep 19 20:58:52.617: INFO: Unable to read wheezy_udp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:52.623: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:52.629: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:52.635: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:52.682: INFO: Unable to read jessie_udp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:52.689: INFO: Unable to read jessie_tcp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:52.696: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:52.702: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:52.741: INFO: Lookups using dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea failed for: [wheezy_udp@dns-test-service.dns-3714.svc.cluster.local wheezy_tcp@dns-test-service.dns-3714.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local jessie_udp@dns-test-service.dns-3714.svc.cluster.local jessie_tcp@dns-test-service.dns-3714.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local]
    
    Sep 19 20:58:57.620: INFO: Unable to read wheezy_udp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:57.626: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:57.633: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:57.643: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local from pod dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea: the server could not find the requested resource (get pods dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea)
    Sep 19 20:58:57.773: INFO: Lookups using dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea failed for: [wheezy_udp@dns-test-service.dns-3714.svc.cluster.local wheezy_tcp@dns-test-service.dns-3714.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3714.svc.cluster.local]
    
    Sep 19 20:59:02.768: INFO: DNS probes using dns-3714/dns-test-2fe3589d-1e02-45c3-81da-ca2ebdce04ea succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 20:59:02.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-3714" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":41,"skipped":622,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
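    The wheezy_*/jessie_* entries in this test are plain host lookups and _http._tcp SRV lookups against the test Services, retried from inside a probe pod until every name resolves. A rough equivalent of those lookups using only Go's resolver, with the names copied from the log; it is meant to run inside the cluster, where *.svc.cluster.local is resolvable:

    // dns_probe.go: the two lookup shapes the dns-test pod performs.
    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	name := "dns-test-service.dns-3714.svc.cluster.local"

    	// wheezy_udp@/wheezy_tcp@<service> style entries are host (A/AAAA) lookups.
    	if addrs, err := net.LookupHost(name); err != nil {
    		fmt.Println("host lookup failed:", err)
    	} else {
    		fmt.Println("addresses:", addrs)
    	}

    	// _http._tcp.<service> entries are SRV lookups for the named port "http".
    	if _, srvs, err := net.LookupSRV("http", "tcp", name); err != nil {
    		fmt.Println("SRV lookup failed:", err)
    	} else {
    		for _, s := range srvs {
    			fmt.Printf("SRV target %s:%d\n", s.Target, s.Port)
    		}
    	}
    }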
    
    SSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":0,"skipped":1,"failed":1,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:54:22.805: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep 19 20:57:57.839: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-8388/dns-test-970ef072-49d7-4702-aab3-76ea38ba4720: the server is currently unable to handle the request (get pods dns-test-970ef072-49d7-4702-aab3-76ea38ba4720)
    Sep 19 20:59:24.879: FAIL: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-8388/dns-test-970ef072-49d7-4702-aab3-76ea38ba4720: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-8388/pods/dns-test-970ef072-49d7-4702-aab3-76ea38ba4720/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0034f9da8, 0x29a3500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000868a50, 0xc0034f9da8, 0xc000868a50, 0xc0034f9da8)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
    testing.tRunner(0xc001e60780, 0x70fea78)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0919 20:59:24.884194      20 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep 19 20:59:24.880: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-8388/dns-test-970ef072-49d7-4702-aab3-76ea38ba4720: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-8388/pods/dns-test-970ef072-49d7-4702-aab3-76ea38ba4720/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0034f9da8, 0x29a3500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000868a50, 0xc0034f9da8, 0xc000868a50, 0xc0034f9da8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0034f9da8, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc001841580, 0x8, 0x8, 0x6ee63d3, 0x7, 0xc00004c800, 0x77b8c18, 0xc003c0b600, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc001082b00, 0xc00004c800, 0xc001841580, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.1()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x58a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc001e60780)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc001e60780)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc001e60780, 0x70fea78)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a84100, 0xc002f03540)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6a84100, 0xc002f03540)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00125fe40, 0x159, 0x86a5e60, 0x7d, 0xd3, 0xc0028e6000, 0x7fb)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x61dbcc0, 0x75da840)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc00125fe40, 0x159, 0xc0034f97e8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00125fe40, 0x159, 0xc0034f98d0, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x6f89b47, 0x24, 0xc0034f9b30, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc000868a00, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0034f9da8, 0x29a3500, 0x0, 0x0)
... skipping 189 lines ...
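    The "Observed a panic" block above is Ginkgo's failure panic (raised by framework.Failf inside the polling condition) being intercepted and logged by the wait utility's crash protection on its way up; the truncated text is Ginkgo's standard guidance for making assertions from goroutines the framework did not start. That documented pattern, as a generic illustration rather than code from this suite:

    // ginkgo_recover_example.go: register GinkgoRecover at the top of a goroutine
    // that makes assertions, so a failure is reported instead of escaping as a panic.
    package example

    import (
    	. "github.com/onsi/ginkgo"
    	. "github.com/onsi/gomega"
    )

    var _ = Describe("asserting from a goroutine", func() {
    	It("recovers failures raised off the main test goroutine", func() {
    		done := make(chan struct{})
    		go func() {
    			defer GinkgoRecover() // without this, a failed Expect panics past Ginkgo
    			defer close(done)
    			Expect(1 + 1).To(Equal(2))
    		}()
    		<-done
    	})
    })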
    Sep 19 20:56:10.279: INFO: ss-1  k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-rzzjq  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:55:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:55:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:55:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-19 20:55:39 +0000 UTC  }]
    Sep 19 20:56:10.279: INFO: 
    Sep 19 20:56:10.279: INFO: StatefulSet ss has not reached scale 0, at 2
    STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6313
    Sep 19 20:56:11.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:56:11.390: INFO: rc: 1
    Sep 19 20:56:11.390: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:56:21.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:56:21.510: INFO: rc: 1
    Sep 19 20:56:21.511: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:56:31.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:56:31.711: INFO: rc: 1
    Sep 19 20:56:31.712: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:56:41.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:56:41.897: INFO: rc: 1
    Sep 19 20:56:41.897: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:56:51.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:56:52.105: INFO: rc: 1
    Sep 19 20:56:52.106: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:57:02.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:57:02.307: INFO: rc: 1
    Sep 19 20:57:02.308: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:57:12.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:57:12.520: INFO: rc: 1
    Sep 19 20:57:12.521: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:57:22.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:57:22.698: INFO: rc: 1
    Sep 19 20:57:22.698: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:57:32.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:57:32.898: INFO: rc: 1
    Sep 19 20:57:32.898: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:57:42.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:57:43.083: INFO: rc: 1
    Sep 19 20:57:43.083: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:57:53.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:57:53.295: INFO: rc: 1
    Sep 19 20:57:53.295: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:58:03.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:58:03.490: INFO: rc: 1
    Sep 19 20:58:03.490: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:58:13.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:58:13.681: INFO: rc: 1
    Sep 19 20:58:13.681: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:58:23.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:58:23.873: INFO: rc: 1
    Sep 19 20:58:23.873: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:58:33.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:58:34.078: INFO: rc: 1
    Sep 19 20:58:34.078: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:58:44.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:58:44.261: INFO: rc: 1
    Sep 19 20:58:44.261: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:58:54.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:58:54.436: INFO: rc: 1
    Sep 19 20:58:54.436: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:59:04.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:59:04.639: INFO: rc: 1
    Sep 19 20:59:04.640: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:59:14.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:59:14.832: INFO: rc: 1
    Sep 19 20:59:14.833: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:59:24.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:59:25.396: INFO: rc: 1
    Sep 19 20:59:25.397: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found
    
    error:
    exit status 1
    Sep 19 20:59:35.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:59:35.581: INFO: rc: 1
    Sep 19 20:59:35.581: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 19 20:59:45.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:59:45.799: INFO: rc: 1
    Sep 19 20:59:45.799: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 19 20:59:55.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 20:59:55.978: INFO: rc: 1
    Sep 19 20:59:55.978: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 19 21:00:05.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 21:00:06.182: INFO: rc: 1
    Sep 19 21:00:06.182: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 19 21:00:16.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 21:00:16.393: INFO: rc: 1
    Sep 19 21:00:16.393: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 19 21:00:26.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 21:00:26.593: INFO: rc: 1
    Sep 19 21:00:26.593: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 19 21:00:36.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 21:00:36.784: INFO: rc: 1
    Sep 19 21:00:36.784: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 19 21:00:46.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 21:00:46.974: INFO: rc: 1
    Sep 19 21:00:46.974: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 19 21:00:56.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 21:00:57.167: INFO: rc: 1
    Sep 19 21:00:57.167: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 19 21:01:07.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 21:01:07.347: INFO: rc: 1
    Sep 19 21:01:07.347: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 19 21:01:17.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6313 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 19 21:01:17.544: INFO: rc: 1
    Sep 19 21:01:17.544: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
    Sep 19 21:01:17.544: INFO: Scaling statefulset ss to 0
    Sep 19 21:01:17.567: INFO: Waiting for statefulset status.replicas updated to 0
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
        Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":42,"skipped":679,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:01:33.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-9260" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":43,"skipped":683,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    • [SLOW TEST:152.736 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should have monotonically increasing restart count [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":630,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 21:01:33.967: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6cb17ec0-a106-4030-b0e3-5afa23b940d3" in namespace "projected-3236" to be "Succeeded or Failed"

    Sep 19 21:01:33.973: INFO: Pod "downwardapi-volume-6cb17ec0-a106-4030-b0e3-5afa23b940d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.598664ms
    Sep 19 21:01:35.980: INFO: Pod "downwardapi-volume-6cb17ec0-a106-4030-b0e3-5afa23b940d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013530653s
    STEP: Saw pod success
    Sep 19 21:01:35.981: INFO: Pod "downwardapi-volume-6cb17ec0-a106-4030-b0e3-5afa23b940d3" satisfied condition "Succeeded or Failed"

    Sep 19 21:01:35.985: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod downwardapi-volume-6cb17ec0-a106-4030-b0e3-5afa23b940d3 container client-container: <nil>
    STEP: delete the pod
    Sep 19 21:01:36.041: INFO: Waiting for pod downwardapi-volume-6cb17ec0-a106-4030-b0e3-5afa23b940d3 to disappear
    Sep 19 21:01:36.047: INFO: Pod downwardapi-volume-6cb17ec0-a106-4030-b0e3-5afa23b940d3 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:01:36.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3236" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":697,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 102 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:01:44.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-7851" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":45,"skipped":707,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:01:51.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4242" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":43,"skipped":639,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:01:51.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-8883" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":44,"skipped":646,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:01:51.586: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-e65b0641-5e9d-4afa-8ddc-06013d7d8fd9
    STEP: Creating a pod to test consume configMaps
    Sep 19 21:01:51.674: INFO: Waiting up to 5m0s for pod "pod-configmaps-bbd3166f-0192-4808-9669-f644f19727b5" in namespace "configmap-300" to be "Succeeded or Failed"

    Sep 19 21:01:51.680: INFO: Pod "pod-configmaps-bbd3166f-0192-4808-9669-f644f19727b5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.577523ms
    Sep 19 21:01:53.687: INFO: Pod "pod-configmaps-bbd3166f-0192-4808-9669-f644f19727b5": Phase="Running", Reason="", readiness=true. Elapsed: 2.012688381s
    Sep 19 21:01:55.707: INFO: Pod "pod-configmaps-bbd3166f-0192-4808-9669-f644f19727b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03283986s
    STEP: Saw pod success
    Sep 19 21:01:55.707: INFO: Pod "pod-configmaps-bbd3166f-0192-4808-9669-f644f19727b5" satisfied condition "Succeeded or Failed"

    Sep 19 21:01:55.712: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-configmaps-bbd3166f-0192-4808-9669-f644f19727b5 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 21:01:55.754: INFO: Waiting for pod pod-configmaps-bbd3166f-0192-4808-9669-f644f19727b5 to disappear
    Sep 19 21:01:55.758: INFO: Pod pod-configmaps-bbd3166f-0192-4808-9669-f644f19727b5 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:01:55.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-300" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":646,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:01:55.834: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-69f8a803-f63a-418a-bbdd-e3626dcd988b
    STEP: Creating a pod to test consume configMaps
    Sep 19 21:01:55.936: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a5799f64-1ced-4f15-a80b-1513def8a6de" in namespace "projected-56" to be "Succeeded or Failed"

    Sep 19 21:01:55.943: INFO: Pod "pod-projected-configmaps-a5799f64-1ced-4f15-a80b-1513def8a6de": Phase="Pending", Reason="", readiness=false. Elapsed: 7.317504ms
    Sep 19 21:01:57.950: INFO: Pod "pod-projected-configmaps-a5799f64-1ced-4f15-a80b-1513def8a6de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014739331s
    STEP: Saw pod success
    Sep 19 21:01:57.951: INFO: Pod "pod-projected-configmaps-a5799f64-1ced-4f15-a80b-1513def8a6de" satisfied condition "Succeeded or Failed"

    Sep 19 21:01:57.956: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-projected-configmaps-a5799f64-1ced-4f15-a80b-1513def8a6de container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 21:01:57.988: INFO: Waiting for pod pod-projected-configmaps-a5799f64-1ced-4f15-a80b-1513def8a6de to disappear
    Sep 19 21:01:57.996: INFO: Pod pod-projected-configmaps-a5799f64-1ced-4f15-a80b-1513def8a6de no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:01:57.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-56" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":662,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
    STEP: Destroying namespace "services-3034" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":47,"skipped":677,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:02:27.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-8827" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":46,"skipped":737,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:02:28.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-7151" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":47,"skipped":793,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-6pfz
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 19 21:02:12.797: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6pfz" in namespace "subpath-4367" to be "Succeeded or Failed"

    Sep 19 21:02:12.805: INFO: Pod "pod-subpath-test-configmap-6pfz": Phase="Pending", Reason="", readiness=false. Elapsed: 7.798245ms
    Sep 19 21:02:14.812: INFO: Pod "pod-subpath-test-configmap-6pfz": Phase="Running", Reason="", readiness=true. Elapsed: 2.014998452s
    Sep 19 21:02:16.819: INFO: Pod "pod-subpath-test-configmap-6pfz": Phase="Running", Reason="", readiness=true. Elapsed: 4.022588763s
    Sep 19 21:02:18.827: INFO: Pod "pod-subpath-test-configmap-6pfz": Phase="Running", Reason="", readiness=true. Elapsed: 6.029753998s
    Sep 19 21:02:20.843: INFO: Pod "pod-subpath-test-configmap-6pfz": Phase="Running", Reason="", readiness=true. Elapsed: 8.046132837s
    Sep 19 21:02:22.849: INFO: Pod "pod-subpath-test-configmap-6pfz": Phase="Running", Reason="", readiness=true. Elapsed: 10.051914971s
    Sep 19 21:02:24.860: INFO: Pod "pod-subpath-test-configmap-6pfz": Phase="Running", Reason="", readiness=true. Elapsed: 12.06334776s
    Sep 19 21:02:26.870: INFO: Pod "pod-subpath-test-configmap-6pfz": Phase="Running", Reason="", readiness=true. Elapsed: 14.073419038s
    Sep 19 21:02:28.878: INFO: Pod "pod-subpath-test-configmap-6pfz": Phase="Running", Reason="", readiness=true. Elapsed: 16.081292489s
    Sep 19 21:02:30.886: INFO: Pod "pod-subpath-test-configmap-6pfz": Phase="Running", Reason="", readiness=true. Elapsed: 18.08891315s
    Sep 19 21:02:32.893: INFO: Pod "pod-subpath-test-configmap-6pfz": Phase="Running", Reason="", readiness=true. Elapsed: 20.096135541s
    Sep 19 21:02:34.900: INFO: Pod "pod-subpath-test-configmap-6pfz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.103524708s
    STEP: Saw pod success
    Sep 19 21:02:34.900: INFO: Pod "pod-subpath-test-configmap-6pfz" satisfied condition "Succeeded or Failed"

    Sep 19 21:02:34.913: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-subpath-test-configmap-6pfz container test-container-subpath-configmap-6pfz: <nil>
    STEP: delete the pod
    Sep 19 21:02:34.936: INFO: Waiting for pod pod-subpath-test-configmap-6pfz to disappear
    Sep 19 21:02:34.942: INFO: Pod pod-subpath-test-configmap-6pfz no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-6pfz
    Sep 19 21:02:34.942: INFO: Deleting pod "pod-subpath-test-configmap-6pfz" in namespace "subpath-4367"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:02:34.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-4367" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":48,"skipped":683,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 282 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  32s   default-scheduler  Successfully assigned pod-network-test-1074/netserver-3 to k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp
      Normal  Pulled     32s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    32s   kubelet            Created container webserver
      Normal  Started    31s   kubelet            Started container webserver
    
    Sep 19 20:57:07.587: INFO: encountered error during dial (did not find expected responses... 

    Tries 1
    Command curl -g -q -s 'http://192.168.0.51:9080/dial?request=hostname&protocol=http&host=192.168.2.19&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-2:{}])
    Sep 19 20:57:07.587: INFO: ...failed...will try again in next pass

    Sep 19 20:57:07.587: INFO: Breadth first check of 192.168.6.39 on host 172.18.0.5...
    Sep 19 20:57:07.594: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.51:9080/dial?request=hostname&protocol=http&host=192.168.6.39&port=8080&tries=1'] Namespace:pod-network-test-1074 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 20:57:07.594: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 20:57:07.738: INFO: Waiting for responses: map[]
    Sep 19 20:57:07.738: INFO: reached 192.168.6.39 after 0/1 tries
    Sep 19 20:57:07.738: INFO: Going to retry 1 out of 4 pods....
... skipping 382 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  6m3s  default-scheduler  Successfully assigned pod-network-test-1074/netserver-3 to k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp
      Normal  Pulled     6m3s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    6m3s  kubelet            Created container webserver
      Normal  Started    6m2s  kubelet            Started container webserver
    
    Sep 19 21:02:38.381: INFO: encountered error during dial (did not find expected responses... 

    Tries 46
    Command curl -g -q -s 'http://192.168.0.51:9080/dial?request=hostname&protocol=http&host=192.168.2.19&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-2:{}])
    Sep 19 21:02:38.381: INFO: ... Done probing pod [[[ 192.168.2.19 ]]]
    Sep 19 21:02:38.381: INFO: succeeded at polling 3 out of 4 connections
    Sep 19 21:02:38.381: INFO: pod polling failure summary:
    Sep 19 21:02:38.381: INFO: Collected error: did not find expected responses... 

    Tries 46
    Command curl -g -q -s 'http://192.168.0.51:9080/dial?request=hostname&protocol=http&host=192.168.2.19&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-2:{}]
    Sep 19 21:02:38.381: FAIL: failed,  1 out of 4 connections failed

    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000e54a80)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 19 21:02:38.381: failed,  1 out of 4 connections failed

    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
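Editor's note: this is the one failing check in this section. Pod-to-pod HTTP from test-container-pod to netserver-2 (192.168.2.19) never answered across 46 tries, while the other three netservers did. The probe itself is just a curl issued through the host test pod; the command below is copied from the log and is only meaningful against that (since torn down) test cluster:

# Re-issue the same dial the framework ran; all IPs and pod names are specific to this run.
kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-1074 \
  exec test-container-pod -- \
  curl -g -q -s 'http://192.168.0.51:9080/dial?request=hostname&protocol=http&host=192.168.2.19&port=8080&tries=1'
# A healthy target shows up in the returned responses; here the framework parsed back an empty set.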
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
    Sep 19 21:02:38.547: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should honor timeout [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Setting timeout (1s) shorter than webhook latency (5s)
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
    STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore

    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is longer than webhook latency

    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is empty (defaulted to 10s in v1)

    STEP: Registering slow webhook via the AdmissionRegistration API
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:02:50.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-1130" for this suite.
    STEP: Destroying namespace "webhook-1130-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":49,"skipped":686,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with downward pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-downwardapi-c68r
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 19 21:02:28.812: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-c68r" in namespace "subpath-6656" to be "Succeeded or Failed"

    Sep 19 21:02:28.819: INFO: Pod "pod-subpath-test-downwardapi-c68r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013912ms
    Sep 19 21:02:30.828: INFO: Pod "pod-subpath-test-downwardapi-c68r": Phase="Running", Reason="", readiness=true. Elapsed: 2.014365423s
    Sep 19 21:02:32.834: INFO: Pod "pod-subpath-test-downwardapi-c68r": Phase="Running", Reason="", readiness=true. Elapsed: 4.020973223s
    Sep 19 21:02:34.840: INFO: Pod "pod-subpath-test-downwardapi-c68r": Phase="Running", Reason="", readiness=true. Elapsed: 6.027020237s
    Sep 19 21:02:36.847: INFO: Pod "pod-subpath-test-downwardapi-c68r": Phase="Running", Reason="", readiness=true. Elapsed: 8.033440387s
    Sep 19 21:02:38.854: INFO: Pod "pod-subpath-test-downwardapi-c68r": Phase="Running", Reason="", readiness=true. Elapsed: 10.040857705s
    Sep 19 21:02:40.864: INFO: Pod "pod-subpath-test-downwardapi-c68r": Phase="Running", Reason="", readiness=true. Elapsed: 12.050538832s
    Sep 19 21:02:42.871: INFO: Pod "pod-subpath-test-downwardapi-c68r": Phase="Running", Reason="", readiness=true. Elapsed: 14.058116034s
    Sep 19 21:02:44.879: INFO: Pod "pod-subpath-test-downwardapi-c68r": Phase="Running", Reason="", readiness=true. Elapsed: 16.065595035s
    Sep 19 21:02:46.888: INFO: Pod "pod-subpath-test-downwardapi-c68r": Phase="Running", Reason="", readiness=true. Elapsed: 18.075157846s
    Sep 19 21:02:48.898: INFO: Pod "pod-subpath-test-downwardapi-c68r": Phase="Running", Reason="", readiness=true. Elapsed: 20.084401425s
    Sep 19 21:02:50.909: INFO: Pod "pod-subpath-test-downwardapi-c68r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.095919063s
    STEP: Saw pod success
    Sep 19 21:02:50.909: INFO: Pod "pod-subpath-test-downwardapi-c68r" satisfied condition "Succeeded or Failed"

    Sep 19 21:02:50.949: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-subpath-test-downwardapi-c68r container test-container-subpath-downwardapi-c68r: <nil>
    STEP: delete the pod
    Sep 19 21:02:51.052: INFO: Waiting for pod pod-subpath-test-downwardapi-c68r to disappear
    Sep 19 21:02:51.065: INFO: Pod pod-subpath-test-downwardapi-c68r no longer exists
    STEP: Deleting pod pod-subpath-test-downwardapi-c68r
    Sep 19 21:02:51.065: INFO: Deleting pod "pod-subpath-test-downwardapi-c68r" in namespace "subpath-6656"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:02:51.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-6656" for this suite.
    
    •SS
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":48,"skipped":807,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:02:55.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-9941" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":49,"skipped":814,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    Sep 19 21:02:57.797: INFO: The status of Pod pod-update-activedeadlineseconds-b4e57221-af3a-47fb-b5c0-61bb2cc1290b is Running (Ready = true)
    STEP: verifying the pod is in kubernetes
    STEP: updating the pod
    Sep 19 21:02:58.342: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b4e57221-af3a-47fb-b5c0-61bb2cc1290b"
    Sep 19 21:02:58.342: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b4e57221-af3a-47fb-b5c0-61bb2cc1290b" in namespace "pods-6959" to be "terminated due to deadline exceeded"
    Sep 19 21:02:58.350: INFO: Pod "pod-update-activedeadlineseconds-b4e57221-af3a-47fb-b5c0-61bb2cc1290b": Phase="Running", Reason="", readiness=true. Elapsed: 7.788987ms
    Sep 19 21:03:00.367: INFO: Pod "pod-update-activedeadlineseconds-b4e57221-af3a-47fb-b5c0-61bb2cc1290b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.025711346s

    Sep 19 21:03:00.368: INFO: Pod "pod-update-activedeadlineseconds-b4e57221-af3a-47fb-b5c0-61bb2cc1290b" satisfied condition "terminated due to deadline exceeded"
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:03:00.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-6959" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":834,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:03:00.398: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-e1ab8895-3540-411c-920a-e0dc453881b8
    STEP: Creating a pod to test consume configMaps
    Sep 19 21:03:00.526: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-52790fa1-1229-48ea-8a43-b2ff2efdc34a" in namespace "projected-1617" to be "Succeeded or Failed"

    Sep 19 21:03:00.541: INFO: Pod "pod-projected-configmaps-52790fa1-1229-48ea-8a43-b2ff2efdc34a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.252888ms
    Sep 19 21:03:02.548: INFO: Pod "pod-projected-configmaps-52790fa1-1229-48ea-8a43-b2ff2efdc34a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022007295s
    STEP: Saw pod success
    Sep 19 21:03:02.548: INFO: Pod "pod-projected-configmaps-52790fa1-1229-48ea-8a43-b2ff2efdc34a" satisfied condition "Succeeded or Failed"

    Sep 19 21:03:02.554: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-projected-configmaps-52790fa1-1229-48ea-8a43-b2ff2efdc34a container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 21:03:02.581: INFO: Waiting for pod pod-projected-configmaps-52790fa1-1229-48ea-8a43-b2ff2efdc34a to disappear
    Sep 19 21:03:02.586: INFO: Pod pod-projected-configmaps-52790fa1-1229-48ea-8a43-b2ff2efdc34a no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:03:02.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1617" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":838,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:03:02.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9522" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":52,"skipped":871,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Servers with support for Table transformation
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:03:02.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "tables-2897" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":53,"skipped":884,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:03:04.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-6800" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":54,"skipped":885,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:03:08.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-2193" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":905,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:03:21.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-779" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":50,"skipped":729,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:03:21.855: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep 19 21:03:21.937: INFO: Waiting up to 5m0s for pod "pod-705ec6c6-bfd3-4151-b1eb-6e5c5e4b0d45" in namespace "emptydir-9062" to be "Succeeded or Failed"

    Sep 19 21:03:21.945: INFO: Pod "pod-705ec6c6-bfd3-4151-b1eb-6e5c5e4b0d45": Phase="Pending", Reason="", readiness=false. Elapsed: 7.532189ms
    Sep 19 21:03:23.955: INFO: Pod "pod-705ec6c6-bfd3-4151-b1eb-6e5c5e4b0d45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017264112s
    STEP: Saw pod success
    Sep 19 21:03:23.955: INFO: Pod "pod-705ec6c6-bfd3-4151-b1eb-6e5c5e4b0d45" satisfied condition "Succeeded or Failed"

    Sep 19 21:03:23.960: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod pod-705ec6c6-bfd3-4151-b1eb-6e5c5e4b0d45 container test-container: <nil>
    STEP: delete the pod
    Sep 19 21:03:24.011: INFO: Waiting for pod pod-705ec6c6-bfd3-4151-b1eb-6e5c5e4b0d45 to disappear
    Sep 19 21:03:24.020: INFO: Pod pod-705ec6c6-bfd3-4151-b1eb-6e5c5e4b0d45 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:03:24.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-9062" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":746,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:03:24.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-775" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":52,"skipped":798,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:03:24.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-5731" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":53,"skipped":803,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:03:31.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7422" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":56,"skipped":916,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 40 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with secret pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-secret-622v
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 19 21:03:31.548: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-622v" in namespace "subpath-5890" to be "Succeeded or Failed"

    Sep 19 21:03:31.558: INFO: Pod "pod-subpath-test-secret-622v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.130583ms
    Sep 19 21:03:33.567: INFO: Pod "pod-subpath-test-secret-622v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018643023s
    Sep 19 21:03:35.575: INFO: Pod "pod-subpath-test-secret-622v": Phase="Running", Reason="", readiness=true. Elapsed: 4.027034769s
    Sep 19 21:03:37.593: INFO: Pod "pod-subpath-test-secret-622v": Phase="Running", Reason="", readiness=true. Elapsed: 6.045101661s
    Sep 19 21:03:39.601: INFO: Pod "pod-subpath-test-secret-622v": Phase="Running", Reason="", readiness=true. Elapsed: 8.052703639s
    Sep 19 21:03:41.607: INFO: Pod "pod-subpath-test-secret-622v": Phase="Running", Reason="", readiness=true. Elapsed: 10.058633668s
... skipping 2 lines ...
    Sep 19 21:03:47.623: INFO: Pod "pod-subpath-test-secret-622v": Phase="Running", Reason="", readiness=true. Elapsed: 16.074267203s
    Sep 19 21:03:49.628: INFO: Pod "pod-subpath-test-secret-622v": Phase="Running", Reason="", readiness=true. Elapsed: 18.079492528s
    Sep 19 21:03:51.632: INFO: Pod "pod-subpath-test-secret-622v": Phase="Running", Reason="", readiness=true. Elapsed: 20.083739793s
    Sep 19 21:03:53.636: INFO: Pod "pod-subpath-test-secret-622v": Phase="Running", Reason="", readiness=true. Elapsed: 22.088080541s
    Sep 19 21:03:55.642: INFO: Pod "pod-subpath-test-secret-622v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.093223747s
    STEP: Saw pod success
    Sep 19 21:03:55.642: INFO: Pod "pod-subpath-test-secret-622v" satisfied condition "Succeeded or Failed"

    Sep 19 21:03:55.645: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-subpath-test-secret-622v container test-container-subpath-secret-622v: <nil>
    STEP: delete the pod
    Sep 19 21:03:55.662: INFO: Waiting for pod pod-subpath-test-secret-622v to disappear
    Sep 19 21:03:55.665: INFO: Pod pod-subpath-test-secret-622v no longer exists
    STEP: Deleting pod pod-subpath-test-secret-622v
    Sep 19 21:03:55.666: INFO: Deleting pod "pod-subpath-test-secret-622v" in namespace "subpath-5890"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:03:55.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-5890" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":57,"skipped":922,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:03:55.682: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override all
    Sep 19 21:03:55.727: INFO: Waiting up to 5m0s for pod "client-containers-835ef7cf-8f97-472f-b50a-c352e2d73a3c" in namespace "containers-4021" to be "Succeeded or Failed"

    Sep 19 21:03:55.731: INFO: Pod "client-containers-835ef7cf-8f97-472f-b50a-c352e2d73a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199726ms
    Sep 19 21:03:57.736: INFO: Pod "client-containers-835ef7cf-8f97-472f-b50a-c352e2d73a3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009248284s
    STEP: Saw pod success
    Sep 19 21:03:57.736: INFO: Pod "client-containers-835ef7cf-8f97-472f-b50a-c352e2d73a3c" satisfied condition "Succeeded or Failed"

    Sep 19 21:03:57.740: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod client-containers-835ef7cf-8f97-472f-b50a-c352e2d73a3c container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 21:03:57.753: INFO: Waiting for pod client-containers-835ef7cf-8f97-472f-b50a-c352e2d73a3c to disappear
    Sep 19 21:03:57.755: INFO: Pod client-containers-835ef7cf-8f97-472f-b50a-c352e2d73a3c no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:03:57.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-4021" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":923,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:03:57.782: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-abdf3e59-1ca2-406b-b5fc-1994ccd18695
    STEP: Creating a pod to test consume configMaps
    Sep 19 21:03:57.822: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b729f8dd-b93a-4b6a-8c24-5b31fbd55ff9" in namespace "projected-8250" to be "Succeeded or Failed"
    Sep 19 21:03:57.829: INFO: Pod "pod-projected-configmaps-b729f8dd-b93a-4b6a-8c24-5b31fbd55ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.037489ms
    Sep 19 21:03:59.834: INFO: Pod "pod-projected-configmaps-b729f8dd-b93a-4b6a-8c24-5b31fbd55ff9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012066748s
    STEP: Saw pod success
    Sep 19 21:03:59.834: INFO: Pod "pod-projected-configmaps-b729f8dd-b93a-4b6a-8c24-5b31fbd55ff9" satisfied condition "Succeeded or Failed"
    Sep 19 21:03:59.837: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-projected-configmaps-b729f8dd-b93a-4b6a-8c24-5b31fbd55ff9 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 21:03:59.857: INFO: Waiting for pod pod-projected-configmaps-b729f8dd-b93a-4b6a-8c24-5b31fbd55ff9 to disappear
    Sep 19 21:03:59.860: INFO: Pod pod-projected-configmaps-b729f8dd-b93a-4b6a-8c24-5b31fbd55ff9 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:03:59.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8250" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":935,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-network] Service endpoints latency
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 417 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:04:09.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svc-latency-7713" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":60,"skipped":940,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:04:09.707: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep 19 21:04:09.754: INFO: Waiting up to 5m0s for pod "pod-211071e4-e91f-407b-87ec-738b5ff30432" in namespace "emptydir-5442" to be "Succeeded or Failed"
    Sep 19 21:04:09.757: INFO: Pod "pod-211071e4-e91f-407b-87ec-738b5ff30432": Phase="Pending", Reason="", readiness=false. Elapsed: 3.200967ms
    Sep 19 21:04:11.763: INFO: Pod "pod-211071e4-e91f-407b-87ec-738b5ff30432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009463979s
    Sep 19 21:04:13.769: INFO: Pod "pod-211071e4-e91f-407b-87ec-738b5ff30432": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01472637s
    STEP: Saw pod success
    Sep 19 21:04:13.769: INFO: Pod "pod-211071e4-e91f-407b-87ec-738b5ff30432" satisfied condition "Succeeded or Failed"
    Sep 19 21:04:13.774: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-211071e4-e91f-407b-87ec-738b5ff30432 container test-container: <nil>
    STEP: delete the pod
    Sep 19 21:04:13.793: INFO: Waiting for pod pod-211071e4-e91f-407b-87ec-738b5ff30432 to disappear
    Sep 19 21:04:13.797: INFO: Pod pod-211071e4-e91f-407b-87ec-738b5ff30432 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:04:13.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-5442" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":61,"skipped":970,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":0,"skipped":1,"failed":2,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 20:59:24.945: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep 19 21:03:00.939: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-3323/dns-test-d90f64d6-066d-4173-881a-fd36582ced84: the server is currently unable to handle the request (get pods dns-test-d90f64d6-066d-4173-881a-fd36582ced84)
    Sep 19 21:04:27.044: FAIL: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-3323/dns-test-d90f64d6-066d-4173-881a-fd36582ced84: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-3323/pods/dns-test-d90f64d6-066d-4173-881a-fd36582ced84/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0034f9da8, 0x29a3500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000127050, 0xc0034f9da8, 0xc000127050, 0xc0034f9da8)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
    testing.tRunner(0xc001e60780, 0x70fea78)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0919 21:04:27.050164      20 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep 19 21:04:27.049: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-3323/dns-test-d90f64d6-066d-4173-881a-fd36582ced84: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-3323/pods/dns-test-d90f64d6-066d-4173-881a-fd36582ced84/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0034f9da8, 0x29a3500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000127050, 0xc0034f9da8, 0xc000127050, 0xc0034f9da8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0034f9da8, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc003780900, 0x8, 0x8, 0x6ee63d3, 0x7, 0xc00004c400, 0x77b8c18, 0xc001eb22c0, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc001082b00, 0xc00004c400, 0xc003780900, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.1()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x58a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc001e60780)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc001e60780)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc001e60780, 0x70fea78)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a84100, 0xc001bb2500)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6a84100, 0xc001bb2500)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00125fe40, 0x159, 0x86a5e60, 0x7d, 0xd3, 0xc003c6b000, 0x7fb)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x61dbcc0, 0x75da840)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc00125fe40, 0x159, 0xc0034f97e8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00125fe40, 0x159, 0xc0034f98d0, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x6f89b47, 0x24, 0xc0034f9b30, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc000127000, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0034f9da8, 0x29a3500, 0x0, 0x0)
... skipping 54 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 19 21:04:27.049: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-3323/dns-test-d90f64d6-066d-4173-881a-fd36582ced84: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-3323/pods/dns-test-d90f64d6-066d-4173-881a-fd36582ced84/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":0,"skipped":1,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:04:27.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7455" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":1,"skipped":3,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:04:27.260: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep 19 21:04:27.300: INFO: Waiting up to 5m0s for pod "security-context-c6d2f7c6-5a0d-469b-a791-dc59c7051402" in namespace "security-context-6232" to be "Succeeded or Failed"
    Sep 19 21:04:27.305: INFO: Pod "security-context-c6d2f7c6-5a0d-469b-a791-dc59c7051402": Phase="Pending", Reason="", readiness=false. Elapsed: 5.193262ms
    Sep 19 21:04:29.311: INFO: Pod "security-context-c6d2f7c6-5a0d-469b-a791-dc59c7051402": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010496529s
    STEP: Saw pod success
    Sep 19 21:04:29.311: INFO: Pod "security-context-c6d2f7c6-5a0d-469b-a791-dc59c7051402" satisfied condition "Succeeded or Failed"
    Sep 19 21:04:29.314: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod security-context-c6d2f7c6-5a0d-469b-a791-dc59c7051402 container test-container: <nil>
    STEP: delete the pod
    Sep 19 21:04:29.331: INFO: Waiting for pod security-context-c6d2f7c6-5a0d-469b-a791-dc59c7051402 to disappear
    Sep 19 21:04:29.334: INFO: Pod security-context-c6d2f7c6-5a0d-469b-a791-dc59c7051402 no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:04:29.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-6232" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:04:35.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-8657" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":3,"skipped":26,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:04:35.541: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a volume subpath [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in volume subpath
    Sep 19 21:04:35.577: INFO: Waiting up to 5m0s for pod "var-expansion-8658f129-daf6-44ae-b6b8-073f0d3c08f8" in namespace "var-expansion-2299" to be "Succeeded or Failed"
    Sep 19 21:04:35.580: INFO: Pod "var-expansion-8658f129-daf6-44ae-b6b8-073f0d3c08f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.707677ms
    Sep 19 21:04:37.585: INFO: Pod "var-expansion-8658f129-daf6-44ae-b6b8-073f0d3c08f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007519685s
    STEP: Saw pod success
    Sep 19 21:04:37.585: INFO: Pod "var-expansion-8658f129-daf6-44ae-b6b8-073f0d3c08f8" satisfied condition "Succeeded or Failed"
    Sep 19 21:04:37.591: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod var-expansion-8658f129-daf6-44ae-b6b8-073f0d3c08f8 container dapi-container: <nil>
    STEP: delete the pod
    Sep 19 21:04:37.607: INFO: Waiting for pod var-expansion-8658f129-daf6-44ae-b6b8-073f0d3c08f8 to disappear
    Sep 19 21:04:37.610: INFO: Pod var-expansion-8658f129-daf6-44ae-b6b8-073f0d3c08f8 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:04:37.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-2299" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":4,"skipped":53,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-6950" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":5,"skipped":91,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:04:55.594: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 19 21:04:55.657: INFO: Waiting up to 5m0s for pod "downward-api-f1b90231-9787-49fa-8577-80822444f541" in namespace "downward-api-6835" to be "Succeeded or Failed"
    Sep 19 21:04:55.666: INFO: Pod "downward-api-f1b90231-9787-49fa-8577-80822444f541": Phase="Pending", Reason="", readiness=false. Elapsed: 8.911351ms
    Sep 19 21:04:57.671: INFO: Pod "downward-api-f1b90231-9787-49fa-8577-80822444f541": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013814959s
    STEP: Saw pod success
    Sep 19 21:04:57.671: INFO: Pod "downward-api-f1b90231-9787-49fa-8577-80822444f541" satisfied condition "Succeeded or Failed"
    Sep 19 21:04:57.674: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod downward-api-f1b90231-9787-49fa-8577-80822444f541 container dapi-container: <nil>
    STEP: delete the pod
    Sep 19 21:04:57.692: INFO: Waiting for pod downward-api-f1b90231-9787-49fa-8577-80822444f541 to disappear
    Sep 19 21:04:57.697: INFO: Pod downward-api-f1b90231-9787-49fa-8577-80822444f541 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:04:57.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6835" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":99,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:04:59.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-9921" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":131,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:05:06.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-384" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":153,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:05:06.503: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via environment variable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-2535/configmap-test-d2b2d91e-6e78-4bb3-98ea-0959fcf644de
    STEP: Creating a pod to test consume configMaps
    Sep 19 21:05:06.543: INFO: Waiting up to 5m0s for pod "pod-configmaps-c0de2609-4e6d-4944-92c4-f69daf386fec" in namespace "configmap-2535" to be "Succeeded or Failed"
    Sep 19 21:05:06.546: INFO: Pod "pod-configmaps-c0de2609-4e6d-4944-92c4-f69daf386fec": Phase="Pending", Reason="", readiness=false. Elapsed: 3.226402ms
    Sep 19 21:05:08.558: INFO: Pod "pod-configmaps-c0de2609-4e6d-4944-92c4-f69daf386fec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014992961s
    STEP: Saw pod success
    Sep 19 21:05:08.558: INFO: Pod "pod-configmaps-c0de2609-4e6d-4944-92c4-f69daf386fec" satisfied condition "Succeeded or Failed"
    Sep 19 21:05:08.562: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-configmaps-c0de2609-4e6d-4944-92c4-f69daf386fec container env-test: <nil>
    STEP: delete the pod
    Sep 19 21:05:08.578: INFO: Waiting for pod pod-configmaps-c0de2609-4e6d-4944-92c4-f69daf386fec to disappear
    Sep 19 21:05:08.581: INFO: Pod pod-configmaps-c0de2609-4e6d-4944-92c4-f69daf386fec no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:05:08.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-2535" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":154,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:05:17.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-watch-3293" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":62,"skipped":1020,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:05:19.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-1994" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":63,"skipped":1056,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Looking for a node to schedule stateful set and pod
    STEP: Creating pod with conflicting port in namespace statefulset-2974
    STEP: Creating statefulset with conflicting port in namespace statefulset-2974
    STEP: Waiting until pod test-pod will start running in namespace statefulset-2974
    STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2974
    Sep 19 21:05:21.576: INFO: Observed stateful pod in namespace: statefulset-2974, name: ss-0, uid: f79b4bdc-91e8-47e3-b551-e3f692af2eab, status phase: Pending. Waiting for statefulset controller to delete.
    Sep 19 21:05:22.162: INFO: Observed stateful pod in namespace: statefulset-2974, name: ss-0, uid: f79b4bdc-91e8-47e3-b551-e3f692af2eab, status phase: Failed. Waiting for statefulset controller to delete.
    Sep 19 21:05:22.168: INFO: Observed stateful pod in namespace: statefulset-2974, name: ss-0, uid: f79b4bdc-91e8-47e3-b551-e3f692af2eab, status phase: Failed. Waiting for statefulset controller to delete.
    Sep 19 21:05:22.172: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2974
    STEP: Removing pod with conflicting port in namespace statefulset-2974
    STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2974 and will be in running state
    [AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
    Sep 19 21:05:26.199: INFO: Deleting all statefulset in ns statefulset-2974
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:05:46.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-2974" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":64,"skipped":1101,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:05:46.256: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep 19 21:05:46.285: INFO: PodSpec: initContainers in spec.initContainers
    Sep 19 21:06:29.249: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-6d4e8ff0-6f72-4e5f-bbdf-11e7a95465f9", GenerateName:"", Namespace:"init-container-5801", SelfLink:"", UID:"1808d0bc-3273-4e9e-a01c-8b21185b261b", ResourceVersion:"12891", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63799218346, loc:(*time.Location)(0x9e363e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"285594018"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002a31dd0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a31de8)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002a31e00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a31e18)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-qq9fg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000a9ab80), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-qq9fg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, 
Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-qq9fg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-qq9fg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0029fc2b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001fc38f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0029fc3d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0029fc3f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0029fc3f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0029fc3fc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002b51380), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", 
Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63799218346, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63799218346, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63799218346, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63799218346, loc:(*time.Location)(0x9e363e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"192.168.0.93", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.0.93"}}, StartTime:(*v1.Time)(0xc002a31e48), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001fc39d0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001fc3b20)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://a37cd35e640a4f3bc78d38cbe4953bdd399bce8be6bfbe33647d760f9d88b7ec", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000a9afe0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000a9af40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0029fc54f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:06:29.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-5801" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":65,"skipped":1105,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:06:35.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-5151" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":66,"skipped":1106,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:06:35.405: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod UID as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 19 21:06:35.453: INFO: Waiting up to 5m0s for pod "downward-api-4604331d-3b6a-4ff3-98d3-e2bb135ba892" in namespace "downward-api-6909" to be "Succeeded or Failed"
    Sep 19 21:06:35.456: INFO: Pod "downward-api-4604331d-3b6a-4ff3-98d3-e2bb135ba892": Phase="Pending", Reason="", readiness=false. Elapsed: 3.572076ms
    Sep 19 21:06:37.461: INFO: Pod "downward-api-4604331d-3b6a-4ff3-98d3-e2bb135ba892": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008357687s
    STEP: Saw pod success
    Sep 19 21:06:37.461: INFO: Pod "downward-api-4604331d-3b6a-4ff3-98d3-e2bb135ba892" satisfied condition "Succeeded or Failed"
    Sep 19 21:06:37.464: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-30lpjb pod downward-api-4604331d-3b6a-4ff3-98d3-e2bb135ba892 container dapi-container: <nil>
    STEP: delete the pod
    Sep 19 21:06:37.491: INFO: Waiting for pod downward-api-4604331d-3b6a-4ff3-98d3-e2bb135ba892 to disappear
    Sep 19 21:06:37.494: INFO: Pod downward-api-4604331d-3b6a-4ff3-98d3-e2bb135ba892 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:06:37.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6909" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":1129,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":277,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:02:38.418: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 278 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  31s   default-scheduler  Successfully assigned pod-network-test-2957/netserver-3 to k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp
      Normal  Pulled     30s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    30s   kubelet            Created container webserver
      Normal  Started    30s   kubelet            Started container webserver
    
    Sep 19 21:03:09.787: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.1.33:9080/dial?request=hostname&protocol=http&host=192.168.2.22&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-2:{}])
    Sep 19 21:03:09.787: INFO: ...failed...will try again in next pass
    Sep 19 21:03:09.787: INFO: Breadth first check of 192.168.6.48 on host 172.18.0.5...
    Sep 19 21:03:09.795: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.33:9080/dial?request=hostname&protocol=http&host=192.168.6.48&port=8080&tries=1'] Namespace:pod-network-test-2957 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:03:09.795: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:03:10.001: INFO: Waiting for responses: map[]
    Sep 19 21:03:10.001: INFO: reached 192.168.6.48 after 0/1 tries
    Sep 19 21:03:10.001: INFO: Going to retry 1 out of 4 pods....
... skipping 382 lines ...
      ----    ------     ----   ----               -------
      Normal  Scheduled  5m59s  default-scheduler  Successfully assigned pod-network-test-2957/netserver-3 to k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp
      Normal  Pulled     5m58s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    5m58s  kubelet            Created container webserver
      Normal  Started    5m58s  kubelet            Started container webserver
    
    Sep 19 21:08:37.442: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.1.33:9080/dial?request=hostname&protocol=http&host=192.168.2.22&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-2:{}])
    Sep 19 21:08:37.442: INFO: ... Done probing pod [[[ 192.168.2.22 ]]]
    Sep 19 21:08:37.442: INFO: succeeded at polling 3 out of 4 connections
    Sep 19 21:08:37.442: INFO: pod polling failure summary:
    Sep 19 21:08:37.442: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.1.33:9080/dial?request=hostname&protocol=http&host=192.168.2.22&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-2:{}]
    Sep 19 21:08:37.442: FAIL: failed,  1 out of 4 connections failed
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000e54a80)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 19 21:08:37.442: failed,  1 out of 4 connections failed
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
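
    The probe above asks the agnhost webserver in test-container-pod to "dial" each netserver pod over HTTP; one target (192.168.2.22, netserver-2) never answered. While the test namespace still exists, a manual re-check of the same path (reusing the command the test logs; pod names and IPs come from this run and will differ elsewhere) could be:

        kubectl -n pod-network-test-2957 get pods -o wide    # map netserver pods to IPs
        kubectl -n pod-network-test-2957 exec test-container-pod -- \
          curl -g -q -s 'http://192.168.1.33:9080/dial?request=hostname&protocol=http&host=192.168.2.22&port=8080&tries=1'
        # or hit the unreachable pod directly, bypassing the dial helper:
        kubectl -n pod-network-test-2957 exec test-container-pod -- curl -s http://192.168.2.22:8080/hostname
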
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.649 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":172,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:09:13.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-776" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":178,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.674 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":68,"skipped":1138,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Lease
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:10:40.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "lease-test-2845" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":69,"skipped":1150,"failed":0}
    
    SSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":54,"skipped":810,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:03:32.343: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
    Sep 19 21:03:44.558: INFO: stderr: ""
    Sep 19 21:03:44.558: INFO: stdout: "true"
    Sep 19 21:03:44.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3301 get pods update-demo-nautilus-bzwsr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 19 21:03:44.685: INFO: stderr: ""
    Sep 19 21:03:44.685: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
    Sep 19 21:03:44.685: INFO: validating pod update-demo-nautilus-bzwsr
    Sep 19 21:07:18.989: INFO: update-demo-nautilus-bzwsr is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-bzwsr)
    Sep 19 21:07:23.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3301 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
    Sep 19 21:07:24.090: INFO: stderr: ""
    Sep 19 21:07:24.090: INFO: stdout: "update-demo-nautilus-bzwsr update-demo-nautilus-mk5m5 "
    Sep 19 21:07:24.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3301 get pods update-demo-nautilus-bzwsr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
    Sep 19 21:07:24.184: INFO: stderr: ""
    Sep 19 21:07:24.184: INFO: stdout: "true"
    Sep 19 21:07:24.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3301 get pods update-demo-nautilus-bzwsr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 19 21:07:24.273: INFO: stderr: ""
    Sep 19 21:07:24.273: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
    Sep 19 21:07:24.273: INFO: validating pod update-demo-nautilus-bzwsr
    Sep 19 21:10:58.121: INFO: update-demo-nautilus-bzwsr is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-bzwsr)
    Sep 19 21:11:03.122: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 +0x2ad
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003b5ad80)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 28 lines ...
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 19 21:11:03.122: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
    ------------------------------
    {"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":54,"skipped":810,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:11:03.781: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 130 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:11:29.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-5222" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":55,"skipped":810,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:11:29.480: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep 19 21:11:29.516: INFO: Waiting up to 5m0s for pod "security-context-7fb6d468-0d8c-491b-bd82-2ca2c10acf9b" in namespace "security-context-9964" to be "Succeeded or Failed"
    Sep 19 21:11:29.519: INFO: Pod "security-context-7fb6d468-0d8c-491b-bd82-2ca2c10acf9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.789855ms
    Sep 19 21:11:31.523: INFO: Pod "security-context-7fb6d468-0d8c-491b-bd82-2ca2c10acf9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006720463s
    STEP: Saw pod success
    Sep 19 21:11:31.523: INFO: Pod "security-context-7fb6d468-0d8c-491b-bd82-2ca2c10acf9b" satisfied condition "Succeeded or Failed"
    Sep 19 21:11:31.526: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-30lpjb pod security-context-7fb6d468-0d8c-491b-bd82-2ca2c10acf9b container test-container: <nil>
    STEP: delete the pod
    Sep 19 21:11:31.549: INFO: Waiting for pod security-context-7fb6d468-0d8c-491b-bd82-2ca2c10acf9b to disappear
    Sep 19 21:11:31.552: INFO: Pod security-context-7fb6d468-0d8c-491b-bd82-2ca2c10acf9b no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:11:31.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-9964" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":56,"skipped":814,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:11:31.573: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep 19 21:11:31.609: INFO: Waiting up to 5m0s for pod "pod-481e3c9d-a3db-4d76-b1bb-252cbf5e8ffd" in namespace "emptydir-3369" to be "Succeeded or Failed"
    Sep 19 21:11:31.612: INFO: Pod "pod-481e3c9d-a3db-4d76-b1bb-252cbf5e8ffd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.894804ms
    Sep 19 21:11:33.616: INFO: Pod "pod-481e3c9d-a3db-4d76-b1bb-252cbf5e8ffd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007477316s
    STEP: Saw pod success
    Sep 19 21:11:33.616: INFO: Pod "pod-481e3c9d-a3db-4d76-b1bb-252cbf5e8ffd" satisfied condition "Succeeded or Failed"
    Sep 19 21:11:33.619: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-30lpjb pod pod-481e3c9d-a3db-4d76-b1bb-252cbf5e8ffd container test-container: <nil>
    STEP: delete the pod
    Sep 19 21:11:33.635: INFO: Waiting for pod pod-481e3c9d-a3db-4d76-b1bb-252cbf5e8ffd to disappear
    Sep 19 21:11:33.638: INFO: Pod pod-481e3c9d-a3db-4d76-b1bb-252cbf5e8ffd no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:11:33.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-3369" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":822,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4568-crds.webhook.example.com via the AdmissionRegistration API
    Sep 19 21:10:54.423: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:11:04.534: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:11:14.640: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:11:24.734: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:11:34.745: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:11:34.745: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc0002c42a0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should mutate custom resource with different stored version [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 19 21:11:34.745: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc0002c42a0>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:11:35.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-4146" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":58,"skipped":866,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:11:35.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3562" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":59,"skipped":893,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:11:35.909: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on tmpfs
    Sep 19 21:11:35.953: INFO: Waiting up to 5m0s for pod "pod-b113bf9a-ca74-4d6b-972b-ba27bc972b33" in namespace "emptydir-6628" to be "Succeeded or Failed"
    Sep 19 21:11:35.960: INFO: Pod "pod-b113bf9a-ca74-4d6b-972b-ba27bc972b33": Phase="Pending", Reason="", readiness=false. Elapsed: 7.049622ms
    Sep 19 21:11:37.963: INFO: Pod "pod-b113bf9a-ca74-4d6b-972b-ba27bc972b33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010602624s
    STEP: Saw pod success
    Sep 19 21:11:37.963: INFO: Pod "pod-b113bf9a-ca74-4d6b-972b-ba27bc972b33" satisfied condition "Succeeded or Failed"
    Sep 19 21:11:37.966: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-30lpjb pod pod-b113bf9a-ca74-4d6b-972b-ba27bc972b33 container test-container: <nil>
    STEP: delete the pod
    Sep 19 21:11:37.981: INFO: Waiting for pod pod-b113bf9a-ca74-4d6b-972b-ba27bc972b33 to disappear
    Sep 19 21:11:37.983: INFO: Pod pod-b113bf9a-ca74-4d6b-972b-ba27bc972b33 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:11:37.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-6628" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":894,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:11:44.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-382" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":61,"skipped":907,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:11:45.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-5197" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":62,"skipped":919,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:11:45.314: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-58783c60-465a-4e32-8e63-7d3b45d9e0d0
    STEP: Creating a pod to test consume secrets
    Sep 19 21:11:45.353: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bfe1b212-d982-4e99-b76b-422efaabc295" in namespace "projected-5860" to be "Succeeded or Failed"
    Sep 19 21:11:45.356: INFO: Pod "pod-projected-secrets-bfe1b212-d982-4e99-b76b-422efaabc295": Phase="Pending", Reason="", readiness=false. Elapsed: 3.00261ms
    Sep 19 21:11:47.360: INFO: Pod "pod-projected-secrets-bfe1b212-d982-4e99-b76b-422efaabc295": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007275861s
    STEP: Saw pod success
    Sep 19 21:11:47.360: INFO: Pod "pod-projected-secrets-bfe1b212-d982-4e99-b76b-422efaabc295" satisfied condition "Succeeded or Failed"
    Sep 19 21:11:47.364: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-projected-secrets-bfe1b212-d982-4e99-b76b-422efaabc295 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 19 21:11:47.388: INFO: Waiting for pod pod-projected-secrets-bfe1b212-d982-4e99-b76b-422efaabc295 to disappear
    Sep 19 21:11:47.390: INFO: Pod pod-projected-secrets-bfe1b212-d982-4e99-b76b-422efaabc295 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:11:47.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5860" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":63,"skipped":935,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:11:47.414: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep 19 21:11:47.454: INFO: Waiting up to 5m0s for pod "pod-73cb5756-e4ab-457a-a90e-95534175d8ca" in namespace "emptydir-7646" to be "Succeeded or Failed"
    Sep 19 21:11:47.457: INFO: Pod "pod-73cb5756-e4ab-457a-a90e-95534175d8ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.948034ms
    Sep 19 21:11:49.462: INFO: Pod "pod-73cb5756-e4ab-457a-a90e-95534175d8ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007920565s
    STEP: Saw pod success
    Sep 19 21:11:49.462: INFO: Pod "pod-73cb5756-e4ab-457a-a90e-95534175d8ca" satisfied condition "Succeeded or Failed"
    Sep 19 21:11:49.466: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-30lpjb pod pod-73cb5756-e4ab-457a-a90e-95534175d8ca container test-container: <nil>
    STEP: delete the pod
    Sep 19 21:11:49.483: INFO: Waiting for pod pod-73cb5756-e4ab-457a-a90e-95534175d8ca to disappear
    Sep 19 21:11:49.488: INFO: Pod pod-73cb5756-e4ab-457a-a90e-95534175d8ca no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:11:49.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7646" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":942,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:09:13.372: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod with failed condition
    STEP: updating the pod
    Sep 19 21:11:13.935: INFO: Successfully updated pod "var-expansion-32f3015a-c430-4a17-aec2-9d4ab1eab4e1"
    STEP: waiting for pod running
    STEP: deleting the pod gracefully
    Sep 19 21:11:15.944: INFO: Deleting pod "var-expansion-32f3015a-c430-4a17-aec2-9d4ab1eab4e1" in namespace "var-expansion-3962"
    Sep 19 21:11:15.949: INFO: Wait up to 5m0s for pod "var-expansion-32f3015a-c430-4a17-aec2-9d4ab1eab4e1" to be fully deleted
... skipping 6 lines ...
    • [SLOW TEST:160.595 seconds]
    [sig-node] Variable Expansion
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":12,"skipped":179,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-node] KubeletManagedEtcHosts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:11:54.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "e2e-kubelet-etc-hosts-4475" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":65,"skipped":968,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:11:54.429: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide host IP as an env var [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 19 21:11:54.468: INFO: Waiting up to 5m0s for pod "downward-api-8bf78f86-d94e-468d-922b-63cce7741b03" in namespace "downward-api-5014" to be "Succeeded or Failed"
    Sep 19 21:11:54.471: INFO: Pod "downward-api-8bf78f86-d94e-468d-922b-63cce7741b03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.703763ms
    Sep 19 21:11:56.474: INFO: Pod "downward-api-8bf78f86-d94e-468d-922b-63cce7741b03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00636216s
    STEP: Saw pod success
    Sep 19 21:11:56.475: INFO: Pod "downward-api-8bf78f86-d94e-468d-922b-63cce7741b03" satisfied condition "Succeeded or Failed"
    Sep 19 21:11:56.477: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod downward-api-8bf78f86-d94e-468d-922b-63cce7741b03 container dapi-container: <nil>
    STEP: delete the pod
    Sep 19 21:11:56.499: INFO: Waiting for pod downward-api-8bf78f86-d94e-468d-922b-63cce7741b03 to disappear
    Sep 19 21:11:56.502: INFO: Pod downward-api-8bf78f86-d94e-468d-922b-63cce7741b03 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:11:56.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-5014" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":66,"skipped":985,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:11:56.517: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 21:11:56.554: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3cb542e-99ad-46d7-bdfd-eebb38f376d5" in namespace "downward-api-4789" to be "Succeeded or Failed"
    Sep 19 21:11:56.558: INFO: Pod "downwardapi-volume-c3cb542e-99ad-46d7-bdfd-eebb38f376d5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.399813ms
    Sep 19 21:11:58.563: INFO: Pod "downwardapi-volume-c3cb542e-99ad-46d7-bdfd-eebb38f376d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008148501s
    STEP: Saw pod success
    Sep 19 21:11:58.563: INFO: Pod "downwardapi-volume-c3cb542e-99ad-46d7-bdfd-eebb38f376d5" satisfied condition "Succeeded or Failed"
    Sep 19 21:11:58.565: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod downwardapi-volume-c3cb542e-99ad-46d7-bdfd-eebb38f376d5 container client-container: <nil>
    STEP: delete the pod
    Sep 19 21:11:58.581: INFO: Waiting for pod downwardapi-volume-c3cb542e-99ad-46d7-bdfd-eebb38f376d5 to disappear
    Sep 19 21:11:58.584: INFO: Pod downwardapi-volume-c3cb542e-99ad-46d7-bdfd-eebb38f376d5 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:11:58.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-4789" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":985,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:12:00.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-3560" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":13,"skipped":180,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:12:05.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-2964" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":68,"skipped":1050,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "webhook-3157-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":14,"skipped":211,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:12:09.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8978" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":69,"skipped":1066,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:12:08.027: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-97ef4ed2-102a-42e3-9f34-2d4e9091c608
    STEP: Creating a pod to test consume secrets
    Sep 19 21:12:08.136: INFO: Waiting up to 5m0s for pod "pod-secrets-34925960-960b-439b-872b-a15d5fd03830" in namespace "secrets-3277" to be "Succeeded or Failed"
    Sep 19 21:12:08.154: INFO: Pod "pod-secrets-34925960-960b-439b-872b-a15d5fd03830": Phase="Pending", Reason="", readiness=false. Elapsed: 18.315148ms
    Sep 19 21:12:10.160: INFO: Pod "pod-secrets-34925960-960b-439b-872b-a15d5fd03830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023430195s
    STEP: Saw pod success
    Sep 19 21:12:10.160: INFO: Pod "pod-secrets-34925960-960b-439b-872b-a15d5fd03830" satisfied condition "Succeeded or Failed"
    Sep 19 21:12:10.163: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod pod-secrets-34925960-960b-439b-872b-a15d5fd03830 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 19 21:12:10.185: INFO: Waiting for pod pod-secrets-34925960-960b-439b-872b-a15d5fd03830 to disappear
    Sep 19 21:12:10.189: INFO: Pod pod-secrets-34925960-960b-439b-872b-a15d5fd03830 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:12:10.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3277" for this suite.
    STEP: Destroying namespace "secret-namespace-2764" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":229,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:12:10.217: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-3c0ef1f5-2c50-4871-8f8b-14931f466f6d
    STEP: Creating a pod to test consume secrets
    Sep 19 21:12:10.269: INFO: Waiting up to 5m0s for pod "pod-secrets-dbef9aa7-52b8-4444-8820-245b2e5b303a" in namespace "secrets-9484" to be "Succeeded or Failed"
    Sep 19 21:12:10.274: INFO: Pod "pod-secrets-dbef9aa7-52b8-4444-8820-245b2e5b303a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.391486ms
    Sep 19 21:12:12.279: INFO: Pod "pod-secrets-dbef9aa7-52b8-4444-8820-245b2e5b303a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009416997s
    STEP: Saw pod success
    Sep 19 21:12:12.279: INFO: Pod "pod-secrets-dbef9aa7-52b8-4444-8820-245b2e5b303a" satisfied condition "Succeeded or Failed"
    Sep 19 21:12:12.282: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod pod-secrets-dbef9aa7-52b8-4444-8820-245b2e5b303a container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 19 21:12:12.296: INFO: Waiting for pod pod-secrets-dbef9aa7-52b8-4444-8820-245b2e5b303a to disappear
    Sep 19 21:12:12.299: INFO: Pod pod-secrets-dbef9aa7-52b8-4444-8820-245b2e5b303a no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:12:12.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-9484" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":233,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "webhook-2115-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":70,"skipped":1070,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:12:13.845: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 19 21:12:13.904: INFO: Waiting up to 5m0s for pod "downward-api-9729d537-b48a-417e-b392-e23b4526aaa0" in namespace "downward-api-1842" to be "Succeeded or Failed"
    Sep 19 21:12:13.909: INFO: Pod "downward-api-9729d537-b48a-417e-b392-e23b4526aaa0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.802551ms
    Sep 19 21:12:15.914: INFO: Pod "downward-api-9729d537-b48a-417e-b392-e23b4526aaa0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008245024s
    STEP: Saw pod success
    Sep 19 21:12:15.914: INFO: Pod "downward-api-9729d537-b48a-417e-b392-e23b4526aaa0" satisfied condition "Succeeded or Failed"
    Sep 19 21:12:15.917: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod downward-api-9729d537-b48a-417e-b392-e23b4526aaa0 container dapi-container: <nil>
    STEP: delete the pod
    Sep 19 21:12:15.933: INFO: Waiting for pod downward-api-9729d537-b48a-417e-b392-e23b4526aaa0 to disappear
    Sep 19 21:12:15.936: INFO: Pod downward-api-9729d537-b48a-417e-b392-e23b4526aaa0 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:12:15.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-1842" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":71,"skipped":1124,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    S
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":69,"skipped":1157,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:11:35.337: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
    STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7688-crds.webhook.example.com via the AdmissionRegistration API
    Sep 19 21:11:49.541: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:11:59.665: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:12:09.755: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:12:19.853: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:12:29.863: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:12:29.863: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc0002c42a0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should mutate custom resource with different stored version [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 19 21:12:29.863: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc0002c42a0>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 50 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:12:52.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-8800" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":17,"skipped":241,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 21:12:52.901: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0a5c336-3adc-4edb-a747-372c4fc47fe9" in namespace "projected-9925" to be "Succeeded or Failed"

    Sep 19 21:12:52.907: INFO: Pod "downwardapi-volume-f0a5c336-3adc-4edb-a747-372c4fc47fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.187529ms
    Sep 19 21:12:54.911: INFO: Pod "downwardapi-volume-f0a5c336-3adc-4edb-a747-372c4fc47fe9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010148673s
    STEP: Saw pod success
    Sep 19 21:12:54.912: INFO: Pod "downwardapi-volume-f0a5c336-3adc-4edb-a747-372c4fc47fe9" satisfied condition "Succeeded or Failed"

    Sep 19 21:12:54.915: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod downwardapi-volume-f0a5c336-3adc-4edb-a747-372c4fc47fe9 container client-container: <nil>
    STEP: delete the pod
    Sep 19 21:12:54.930: INFO: Waiting for pod downwardapi-volume-f0a5c336-3adc-4edb-a747-372c4fc47fe9 to disappear
    Sep 19 21:12:54.933: INFO: Pod downwardapi-volume-f0a5c336-3adc-4edb-a747-372c4fc47fe9 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:12:54.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9925" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":250,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
    STEP: Deploying the webhook pod
    STEP: Wait for the deployment to be ready
    Sep 19 21:12:16.445: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep 19 21:12:19.468: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should unconditionally reject operations on fail closed webhook [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
    Sep 19 21:12:29.487: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:12:39.599: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:12:49.703: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:12:59.800: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:13:09.812: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:13:09.813: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    
    • Failure [53.944 seconds]
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should unconditionally reject operations on fail closed webhook [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 19 21:13:09.813: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 2 lines ...
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:12:54.987: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 19 21:12:57.029: INFO: Deleting pod "var-expansion-712ba181-b47c-40cf-ad3d-829109a4bd15" in namespace "var-expansion-8133"
    Sep 19 21:12:57.034: INFO: Wait up to 5m0s for pod "var-expansion-712ba181-b47c-40cf-ad3d-829109a4bd15" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:13:11.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-8133" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":19,"skipped":281,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":71,"skipped":1125,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:13:09.897: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 4 lines ...
    STEP: Deploying the webhook pod
    STEP: Wait for the deployment to be ready
    Sep 19 21:13:10.805: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep 19 21:13:13.831: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should unconditionally reject operations on fail closed webhook [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
    STEP: create a namespace for the webhook
    STEP: create a configmap should be unconditionally rejected by the webhook
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:13:13.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-7960" for this suite.
    STEP: Destroying namespace "webhook-7960-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":72,"skipped":1125,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] RuntimeClass
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:13:14.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "runtimeclass-8639" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":-1,"completed":73,"skipped":1135,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:13:14.111: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep 19 21:13:14.155: INFO: Waiting up to 5m0s for pod "pod-19bdf9e5-1454-4196-94b5-014e35699b66" in namespace "emptydir-9163" to be "Succeeded or Failed"
    Sep 19 21:13:14.160: INFO: Pod "pod-19bdf9e5-1454-4196-94b5-014e35699b66": Phase="Pending", Reason="", readiness=false. Elapsed: 3.261228ms
    Sep 19 21:13:16.163: INFO: Pod "pod-19bdf9e5-1454-4196-94b5-014e35699b66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006356427s
    STEP: Saw pod success
    Sep 19 21:13:16.163: INFO: Pod "pod-19bdf9e5-1454-4196-94b5-014e35699b66" satisfied condition "Succeeded or Failed"
    Sep 19 21:13:16.165: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-19bdf9e5-1454-4196-94b5-014e35699b66 container test-container: <nil>
    STEP: delete the pod
    Sep 19 21:13:16.181: INFO: Waiting for pod pod-19bdf9e5-1454-4196-94b5-014e35699b66 to disappear
    Sep 19 21:13:16.184: INFO: Pod pod-19bdf9e5-1454-4196-94b5-014e35699b66 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:13:16.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-9163" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":74,"skipped":1138,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:13:22.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-6527" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":20,"skipped":320,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":69,"skipped":1157,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:12:30.461: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
    STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5128-crds.webhook.example.com via the AdmissionRegistration API
    Sep 19 21:12:44.823: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:12:54.935: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:13:05.045: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:13:15.136: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:13:25.147: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:13:25.147: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc0002c42a0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should mutate custom resource with different stored version [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 19 21:13:25.147: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc0002c42a0>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":69,"skipped":1157,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Aggregator
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:13:29.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "aggregator-5414" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":21,"skipped":377,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:13:30.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-5531" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":22,"skipped":380,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 260 lines ...
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    	],
    	"StillContactingPeers": true
    }
    Sep 19 21:14:20.271: FAIL: validating pre-stop.
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-node] PreStop
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
      should call prestop when killing a pod  [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 19 21:14:20.271: validating pre-stop.
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151
    ------------------------------
    {"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":74,"skipped":1140,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    [BeforeEach] [sig-node] PreStop
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:14:20.300: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename prestop
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:14:29.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "prestop-7063" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":75,"skipped":1140,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:14:29.430: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on node default medium
    Sep 19 21:14:29.474: INFO: Waiting up to 5m0s for pod "pod-58d24922-311a-4de1-a23b-4447396e819f" in namespace "emptydir-6083" to be "Succeeded or Failed"
    Sep 19 21:14:29.477: INFO: Pod "pod-58d24922-311a-4de1-a23b-4447396e819f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.043734ms
    Sep 19 21:14:31.481: INFO: Pod "pod-58d24922-311a-4de1-a23b-4447396e819f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007545856s
    STEP: Saw pod success
    Sep 19 21:14:31.481: INFO: Pod "pod-58d24922-311a-4de1-a23b-4447396e819f" satisfied condition "Succeeded or Failed"
    Sep 19 21:14:31.484: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-58d24922-311a-4de1-a23b-4447396e819f container test-container: <nil>
    STEP: delete the pod
    Sep 19 21:14:31.504: INFO: Waiting for pod pod-58d24922-311a-4de1-a23b-4447396e819f to disappear
    Sep 19 21:14:31.507: INFO: Pod pod-58d24922-311a-4de1-a23b-4447396e819f no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:14:31.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-6083" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":76,"skipped":1154,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 21:14:31.576: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40bd8b80-d919-4cee-a2fb-a16a83391b96" in namespace "downward-api-4950" to be "Succeeded or Failed"
    Sep 19 21:14:31.580: INFO: Pod "downwardapi-volume-40bd8b80-d919-4cee-a2fb-a16a83391b96": Phase="Pending", Reason="", readiness=false. Elapsed: 3.901304ms
    Sep 19 21:14:33.584: INFO: Pod "downwardapi-volume-40bd8b80-d919-4cee-a2fb-a16a83391b96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0078815s
    STEP: Saw pod success
    Sep 19 21:14:33.584: INFO: Pod "downwardapi-volume-40bd8b80-d919-4cee-a2fb-a16a83391b96" satisfied condition "Succeeded or Failed"
    Sep 19 21:14:33.586: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod downwardapi-volume-40bd8b80-d919-4cee-a2fb-a16a83391b96 container client-container: <nil>
    STEP: delete the pod
    Sep 19 21:14:33.603: INFO: Waiting for pod downwardapi-volume-40bd8b80-d919-4cee-a2fb-a16a83391b96 to disappear
    Sep 19 21:14:33.605: INFO: Pod downwardapi-volume-40bd8b80-d919-4cee-a2fb-a16a83391b96 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:14:33.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-4950" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":77,"skipped":1164,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    S
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":277,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:08:37.458: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 278 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  30s   default-scheduler  Successfully assigned pod-network-test-7067/netserver-3 to k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp
      Normal  Pulled     29s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    29s   kubelet            Created container webserver
      Normal  Started    29s   kubelet            Started container webserver
    
    Sep 19 21:09:07.369: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.0.95:9080/dial?request=hostname&protocol=http&host=192.168.2.28&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-2:{}])
    Sep 19 21:09:07.369: INFO: ...failed...will try again in next pass
    Sep 19 21:09:07.369: INFO: Breadth first check of 192.168.6.55 on host 172.18.0.5...
    Sep 19 21:09:07.373: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.95:9080/dial?request=hostname&protocol=http&host=192.168.6.55&port=8080&tries=1'] Namespace:pod-network-test-7067 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:09:07.373: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:09:07.456: INFO: Waiting for responses: map[]
    Sep 19 21:09:07.456: INFO: reached 192.168.6.55 after 0/1 tries
    Sep 19 21:09:07.456: INFO: Going to retry 1 out of 4 pods....
... skipping 382 lines ...
      ----    ------     ----   ----               -------
      Normal  Scheduled  5m57s  default-scheduler  Successfully assigned pod-network-test-7067/netserver-3 to k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp
      Normal  Pulled     5m56s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    5m56s  kubelet            Created container webserver
      Normal  Started    5m56s  kubelet            Started container webserver
    
    Sep 19 21:14:34.327: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.0.95:9080/dial?request=hostname&protocol=http&host=192.168.2.28&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-2:{}])
    Sep 19 21:14:34.327: INFO: ... Done probing pod [[[ 192.168.2.28 ]]]
    Sep 19 21:14:34.327: INFO: succeeded at polling 3 out of 4 connections
    Sep 19 21:14:34.327: INFO: pod polling failure summary:
    Sep 19 21:14:34.327: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.0.95:9080/dial?request=hostname&protocol=http&host=192.168.2.28&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-2:{}]
    Sep 19 21:14:34.327: FAIL: failed,  1 out of 4 connections failed
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000e54a80)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 19 21:14:34.327: failed,  1 out of 4 connections failed
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":277,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:14:33.618: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's command [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's command
    Sep 19 21:14:33.656: INFO: Waiting up to 5m0s for pod "var-expansion-a9d125ba-900e-4394-bda3-e315caed8b0d" in namespace "var-expansion-367" to be "Succeeded or Failed"
    Sep 19 21:14:33.659: INFO: Pod "var-expansion-a9d125ba-900e-4394-bda3-e315caed8b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.33551ms
    Sep 19 21:14:35.662: INFO: Pod "var-expansion-a9d125ba-900e-4394-bda3-e315caed8b0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006279473s
    STEP: Saw pod success
    Sep 19 21:14:35.663: INFO: Pod "var-expansion-a9d125ba-900e-4394-bda3-e315caed8b0d" satisfied condition "Succeeded or Failed"
    Sep 19 21:14:35.666: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod var-expansion-a9d125ba-900e-4394-bda3-e315caed8b0d container dapi-container: <nil>
    STEP: delete the pod
    Sep 19 21:14:35.680: INFO: Waiting for pod var-expansion-a9d125ba-900e-4394-bda3-e315caed8b0d to disappear
    Sep 19 21:14:35.683: INFO: Pod var-expansion-a9d125ba-900e-4394-bda3-e315caed8b0d no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:14:35.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-367" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":78,"skipped":1165,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":388,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:13:42.282: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename cronjob
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:15:00.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-8155" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":24,"skipped":388,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:15:07.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-9677" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":25,"skipped":390,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
    STEP: Registering the crd webhook via the AdmissionRegistration API
    Sep 19 21:14:49.529: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:14:59.640: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:15:09.744: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:15:19.843: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:15:29.854: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:15:29.854: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should deny crd creation [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 19 21:15:29.854: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 15 lines ...
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep 19 21:15:11.238: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
    [It] should be able to convert from CR v1 to CR v2 [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 19 21:15:11.243: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:15:23.848: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-2049-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-12.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 19 21:15:33.954: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-2049-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-12.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 19 21:15:44.054: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-2049-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-12.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 19 21:15:54.157: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-2049-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-12.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 19 21:16:04.162: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-2049-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-12.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 19 21:16:04.162: FAIL: Unexpected error:
        <*errors.errorString | 0xc000240280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    • Failure [57.233 seconds]
    [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should be able to convert from CR v1 to CR v2 [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 19 21:16:04.162: Unexpected error:
          <*errors.errorString | 0xc000240280>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:499
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":25,"skipped":407,"failed":4,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:16:04.748: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename crd-webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
    STEP: Destroying namespace "crd-webhook-467" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":26,"skipped":407,"failed":4,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:16:11.756: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override command
    Sep 19 21:16:11.821: INFO: Waiting up to 5m0s for pod "client-containers-8b5bd1b8-1289-422b-840b-d84ec5478fb6" in namespace "containers-2874" to be "Succeeded or Failed"
    Sep 19 21:16:11.826: INFO: Pod "client-containers-8b5bd1b8-1289-422b-840b-d84ec5478fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.435698ms
    Sep 19 21:16:13.831: INFO: Pod "client-containers-8b5bd1b8-1289-422b-840b-d84ec5478fb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010502725s
    STEP: Saw pod success
    Sep 19 21:16:13.832: INFO: Pod "client-containers-8b5bd1b8-1289-422b-840b-d84ec5478fb6" satisfied condition "Succeeded or Failed"
    Sep 19 21:16:13.834: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod client-containers-8b5bd1b8-1289-422b-840b-d84ec5478fb6 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 21:16:13.853: INFO: Waiting for pod client-containers-8b5bd1b8-1289-422b-840b-d84ec5478fb6 to disappear
    Sep 19 21:16:13.856: INFO: Pod client-containers-8b5bd1b8-1289-422b-840b-d84ec5478fb6 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:16:13.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-2874" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":407,"failed":4,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":78,"skipped":1194,"failed":5,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:15:29.933: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
    STEP: Registering the crd webhook via the AdmissionRegistration API
    Sep 19 21:15:44.019: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:15:54.129: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:16:04.232: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:16:14.330: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:16:24.340: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:16:24.341: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should deny crd creation [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 19 21:16:24.341: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with projected pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-projected-xmhm
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 19 21:16:13.937: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-xmhm" in namespace "subpath-2138" to be "Succeeded or Failed"
    Sep 19 21:16:13.940: INFO: Pod "pod-subpath-test-projected-xmhm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.611554ms
    Sep 19 21:16:15.944: INFO: Pod "pod-subpath-test-projected-xmhm": Phase="Running", Reason="", readiness=true. Elapsed: 2.006702735s
    Sep 19 21:16:17.951: INFO: Pod "pod-subpath-test-projected-xmhm": Phase="Running", Reason="", readiness=true. Elapsed: 4.013396399s
    Sep 19 21:16:19.958: INFO: Pod "pod-subpath-test-projected-xmhm": Phase="Running", Reason="", readiness=true. Elapsed: 6.020156428s
    Sep 19 21:16:21.962: INFO: Pod "pod-subpath-test-projected-xmhm": Phase="Running", Reason="", readiness=true. Elapsed: 8.02463395s
    Sep 19 21:16:23.967: INFO: Pod "pod-subpath-test-projected-xmhm": Phase="Running", Reason="", readiness=true. Elapsed: 10.029947621s
    Sep 19 21:16:25.972: INFO: Pod "pod-subpath-test-projected-xmhm": Phase="Running", Reason="", readiness=true. Elapsed: 12.034620666s
    Sep 19 21:16:27.976: INFO: Pod "pod-subpath-test-projected-xmhm": Phase="Running", Reason="", readiness=true. Elapsed: 14.038971377s
    Sep 19 21:16:29.981: INFO: Pod "pod-subpath-test-projected-xmhm": Phase="Running", Reason="", readiness=true. Elapsed: 16.043558167s
    Sep 19 21:16:31.986: INFO: Pod "pod-subpath-test-projected-xmhm": Phase="Running", Reason="", readiness=true. Elapsed: 18.048734479s
    Sep 19 21:16:33.992: INFO: Pod "pod-subpath-test-projected-xmhm": Phase="Running", Reason="", readiness=true. Elapsed: 20.054273216s
    Sep 19 21:16:35.996: INFO: Pod "pod-subpath-test-projected-xmhm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.058391924s
    STEP: Saw pod success
    Sep 19 21:16:35.996: INFO: Pod "pod-subpath-test-projected-xmhm" satisfied condition "Succeeded or Failed"
    Sep 19 21:16:35.999: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-subpath-test-projected-xmhm container test-container-subpath-projected-xmhm: <nil>
    STEP: delete the pod
    Sep 19 21:16:36.015: INFO: Waiting for pod pod-subpath-test-projected-xmhm to disappear
    Sep 19 21:16:36.018: INFO: Pod pod-subpath-test-projected-xmhm no longer exists
    STEP: Deleting pod pod-subpath-test-projected-xmhm
    Sep 19 21:16:36.018: INFO: Deleting pod "pod-subpath-test-projected-xmhm" in namespace "subpath-2138"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:16:36.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-2138" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":28,"skipped":422,"failed":4,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 21:16:36.196: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30418645-6021-431a-8bbc-2063255d6a9b" in namespace "projected-562" to be "Succeeded or Failed"
    Sep 19 21:16:36.199: INFO: Pod "downwardapi-volume-30418645-6021-431a-8bbc-2063255d6a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.55784ms
    Sep 19 21:16:38.204: INFO: Pod "downwardapi-volume-30418645-6021-431a-8bbc-2063255d6a9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007530358s
    STEP: Saw pod success
    Sep 19 21:16:38.204: INFO: Pod "downwardapi-volume-30418645-6021-431a-8bbc-2063255d6a9b" satisfied condition "Succeeded or Failed"
    Sep 19 21:16:38.207: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod downwardapi-volume-30418645-6021-431a-8bbc-2063255d6a9b container client-container: <nil>
    STEP: delete the pod
    Sep 19 21:16:38.225: INFO: Waiting for pod downwardapi-volume-30418645-6021-431a-8bbc-2063255d6a9b to disappear
    Sep 19 21:16:38.227: INFO: Pod downwardapi-volume-30418645-6021-431a-8bbc-2063255d6a9b no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:16:38.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-562" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":498,"failed":4,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:16:40.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-3866" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":504,"failed":4,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 21:16:40.389: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b2d6d8c-9b65-404e-bbb5-9c6735f85784" in namespace "projected-4032" to be "Succeeded or Failed"
    Sep 19 21:16:40.392: INFO: Pod "downwardapi-volume-5b2d6d8c-9b65-404e-bbb5-9c6735f85784": Phase="Pending", Reason="", readiness=false. Elapsed: 3.175918ms
    Sep 19 21:16:42.398: INFO: Pod "downwardapi-volume-5b2d6d8c-9b65-404e-bbb5-9c6735f85784": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008756824s
    STEP: Saw pod success
    Sep 19 21:16:42.398: INFO: Pod "downwardapi-volume-5b2d6d8c-9b65-404e-bbb5-9c6735f85784" satisfied condition "Succeeded or Failed"
    Sep 19 21:16:42.401: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod downwardapi-volume-5b2d6d8c-9b65-404e-bbb5-9c6735f85784 container client-container: <nil>
    STEP: delete the pod
    Sep 19 21:16:42.429: INFO: Waiting for pod downwardapi-volume-5b2d6d8c-9b65-404e-bbb5-9c6735f85784 to disappear
    Sep 19 21:16:42.433: INFO: Pod downwardapi-volume-5b2d6d8c-9b65-404e-bbb5-9c6735f85784 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:16:42.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4032" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":518,"failed":4,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:16:42.444: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 21:16:42.480: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fbe889df-e90a-45ec-8b42-7e916a5fa399" in namespace "projected-6933" to be "Succeeded or Failed"
    Sep 19 21:16:42.484: INFO: Pod "downwardapi-volume-fbe889df-e90a-45ec-8b42-7e916a5fa399": Phase="Pending", Reason="", readiness=false. Elapsed: 3.562619ms
    Sep 19 21:16:44.488: INFO: Pod "downwardapi-volume-fbe889df-e90a-45ec-8b42-7e916a5fa399": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007903301s
    STEP: Saw pod success
    Sep 19 21:16:44.488: INFO: Pod "downwardapi-volume-fbe889df-e90a-45ec-8b42-7e916a5fa399" satisfied condition "Succeeded or Failed"
    Sep 19 21:16:44.491: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod downwardapi-volume-fbe889df-e90a-45ec-8b42-7e916a5fa399 container client-container: <nil>
    STEP: delete the pod
    Sep 19 21:16:44.505: INFO: Waiting for pod downwardapi-volume-fbe889df-e90a-45ec-8b42-7e916a5fa399 to disappear
    Sep 19 21:16:44.508: INFO: Pod downwardapi-volume-fbe889df-e90a-45ec-8b42-7e916a5fa399 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:16:44.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6933" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":518,"failed":4,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 66 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
        should perform rolling updates and roll backs of template modifications [Conformance]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":12,"skipped":278,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 21:16:44.592: INFO: Waiting up to 5m0s for pod "downwardapi-volume-190d5b81-96c8-4165-a579-3437deec20fd" in namespace "downward-api-9289" to be "Succeeded or Failed"
    Sep 19 21:16:44.595: INFO: Pod "downwardapi-volume-190d5b81-96c8-4165-a579-3437deec20fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.966036ms
    Sep 19 21:16:46.600: INFO: Pod "downwardapi-volume-190d5b81-96c8-4165-a579-3437deec20fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007638179s
    STEP: Saw pod success
    Sep 19 21:16:46.600: INFO: Pod "downwardapi-volume-190d5b81-96c8-4165-a579-3437deec20fd" satisfied condition "Succeeded or Failed"
    Sep 19 21:16:46.603: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-rzzjq pod downwardapi-volume-190d5b81-96c8-4165-a579-3437deec20fd container client-container: <nil>
    STEP: delete the pod
    Sep 19 21:16:46.626: INFO: Waiting for pod downwardapi-volume-190d5b81-96c8-4165-a579-3437deec20fd to disappear
    Sep 19 21:16:46.629: INFO: Pod downwardapi-volume-190d5b81-96c8-4165-a579-3437deec20fd no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:16:46.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-9289" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":535,"failed":4,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:16:45.421: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-a6ca3bad-4f2f-4dc6-902d-7273d8d0f1af
    STEP: Creating a pod to test consume secrets
    Sep 19 21:16:45.462: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6a5e3dd3-5373-43a6-905a-9760191b9381" in namespace "projected-3276" to be "Succeeded or Failed"
    Sep 19 21:16:45.465: INFO: Pod "pod-projected-secrets-6a5e3dd3-5373-43a6-905a-9760191b9381": Phase="Pending", Reason="", readiness=false. Elapsed: 2.349071ms
    Sep 19 21:16:47.469: INFO: Pod "pod-projected-secrets-6a5e3dd3-5373-43a6-905a-9760191b9381": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006521236s
    STEP: Saw pod success
    Sep 19 21:16:47.469: INFO: Pod "pod-projected-secrets-6a5e3dd3-5373-43a6-905a-9760191b9381" satisfied condition "Succeeded or Failed"
    Sep 19 21:16:47.472: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod pod-projected-secrets-6a5e3dd3-5373-43a6-905a-9760191b9381 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 19 21:16:47.491: INFO: Waiting for pod pod-projected-secrets-6a5e3dd3-5373-43a6-905a-9760191b9381 to disappear
    Sep 19 21:16:47.495: INFO: Pod pod-projected-secrets-6a5e3dd3-5373-43a6-905a-9760191b9381 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:16:47.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3276" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":306,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:16:46.654: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-95fbe809-9e96-4a78-b150-1b3f9b21751f
    STEP: Creating a pod to test consume secrets
    Sep 19 21:16:46.697: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ba0419f3-44c1-4518-bc46-4b7f582dcb42" in namespace "projected-3839" to be "Succeeded or Failed"
    Sep 19 21:16:46.701: INFO: Pod "pod-projected-secrets-ba0419f3-44c1-4518-bc46-4b7f582dcb42": Phase="Pending", Reason="", readiness=false. Elapsed: 3.489604ms
    Sep 19 21:16:48.706: INFO: Pod "pod-projected-secrets-ba0419f3-44c1-4518-bc46-4b7f582dcb42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008665844s
    STEP: Saw pod success
    Sep 19 21:16:48.706: INFO: Pod "pod-projected-secrets-ba0419f3-44c1-4518-bc46-4b7f582dcb42" satisfied condition "Succeeded or Failed"
    Sep 19 21:16:48.710: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-projected-secrets-ba0419f3-44c1-4518-bc46-4b7f582dcb42 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 19 21:16:48.729: INFO: Waiting for pod pod-projected-secrets-ba0419f3-44c1-4518-bc46-4b7f582dcb42 to disappear
    Sep 19 21:16:48.734: INFO: Pod pod-projected-secrets-ba0419f3-44c1-4518-bc46-4b7f582dcb42 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:16:48.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3839" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":545,"failed":4,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":78,"skipped":1194,"failed":6,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:16:24.394: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
    STEP: Registering the crd webhook via the AdmissionRegistration API
    Sep 19 21:16:38.223: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:16:48.334: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:16:58.437: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:17:08.536: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:17:18.545: INFO: Waiting for webhook configuration to be ready...
    Sep 19 21:17:18.545: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should deny crd creation [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 19 21:17:18.545: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2059
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":78,"skipped":1194,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 49 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:17:24.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9145" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":79,"skipped":1212,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:243.139 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":70,"skipped":1199,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:17:28.942: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep 19 21:17:28.980: INFO: Waiting up to 5m0s for pod "pod-66bfe96d-1c96-4ad1-98ad-f97bb316e44d" in namespace "emptydir-4240" to be "Succeeded or Failed"
    Sep 19 21:17:28.983: INFO: Pod "pod-66bfe96d-1c96-4ad1-98ad-f97bb316e44d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300247ms
    Sep 19 21:17:30.987: INFO: Pod "pod-66bfe96d-1c96-4ad1-98ad-f97bb316e44d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006635242s
    STEP: Saw pod success
    Sep 19 21:17:30.987: INFO: Pod "pod-66bfe96d-1c96-4ad1-98ad-f97bb316e44d" satisfied condition "Succeeded or Failed"
    Sep 19 21:17:30.991: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-30lpjb pod pod-66bfe96d-1c96-4ad1-98ad-f97bb316e44d container test-container: <nil>
    STEP: delete the pod
    Sep 19 21:17:31.017: INFO: Waiting for pod pod-66bfe96d-1c96-4ad1-98ad-f97bb316e44d to disappear
    Sep 19 21:17:31.020: INFO: Pod pod-66bfe96d-1c96-4ad1-98ad-f97bb316e44d no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:17:31.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4240" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":71,"skipped":1211,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 66 lines ...
    STEP: Destroying namespace "services-7617" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":312,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:17:31.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-1" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":15,"skipped":321,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:17:38.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-2305" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":72,"skipped":1221,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:17:41.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-5043" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":16,"skipped":322,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:17:42.009: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-aa62dc78-42fc-4a8b-8a8a-238d1aa7a81f
    STEP: Creating a pod to test consume configMaps
    Sep 19 21:17:42.056: INFO: Waiting up to 5m0s for pod "pod-configmaps-af5569f7-b496-4ff7-a405-8fd35e694c28" in namespace "configmap-7152" to be "Succeeded or Failed"
    Sep 19 21:17:42.059: INFO: Pod "pod-configmaps-af5569f7-b496-4ff7-a405-8fd35e694c28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.612676ms
    Sep 19 21:17:44.063: INFO: Pod "pod-configmaps-af5569f7-b496-4ff7-a405-8fd35e694c28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006945818s
    STEP: Saw pod success
    Sep 19 21:17:44.063: INFO: Pod "pod-configmaps-af5569f7-b496-4ff7-a405-8fd35e694c28" satisfied condition "Succeeded or Failed"
    Sep 19 21:17:44.066: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-configmaps-af5569f7-b496-4ff7-a405-8fd35e694c28 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 21:17:44.084: INFO: Waiting for pod pod-configmaps-af5569f7-b496-4ff7-a405-8fd35e694c28 to disappear
    Sep 19 21:17:44.087: INFO: Pod pod-configmaps-af5569f7-b496-4ff7-a405-8fd35e694c28 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:17:44.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-7152" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":339,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:17:54.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-5705" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":80,"skipped":1221,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:17:59.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-2168" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":18,"skipped":398,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:17:59.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-7285" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":19,"skipped":406,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 101 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:18:00.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-4780" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":35,"skipped":557,"failed":4,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:17:54.768: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename job
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a job
    STEP: Ensuring job reaches completions
    [AfterEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:18:00.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-3363" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":81,"skipped":1230,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:18:00.827: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep 19 21:18:00.867: INFO: Waiting up to 5m0s for pod "pod-18695a7b-f840-496c-88af-ce3d5c1c467d" in namespace "emptydir-8937" to be "Succeeded or Failed"
    Sep 19 21:18:00.870: INFO: Pod "pod-18695a7b-f840-496c-88af-ce3d5c1c467d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.009884ms
    Sep 19 21:18:02.874: INFO: Pod "pod-18695a7b-f840-496c-88af-ce3d5c1c467d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007411284s
    STEP: Saw pod success
    Sep 19 21:18:02.874: INFO: Pod "pod-18695a7b-f840-496c-88af-ce3d5c1c467d" satisfied condition "Succeeded or Failed"
    Sep 19 21:18:02.877: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod pod-18695a7b-f840-496c-88af-ce3d5c1c467d container test-container: <nil>
    STEP: delete the pod
    Sep 19 21:18:02.894: INFO: Waiting for pod pod-18695a7b-f840-496c-88af-ce3d5c1c467d to disappear
    Sep 19 21:18:02.897: INFO: Pod pod-18695a7b-f840-496c-88af-ce3d5c1c467d no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:18:02.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8937" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":82,"skipped":1233,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:18:04.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-6331" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":20,"skipped":412,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 56 lines ...
    STEP: Destroying namespace "services-4408" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":73,"skipped":1228,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:18:10.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9058" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":83,"skipped":1305,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:18:11.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-1105" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":21,"skipped":434,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-projected-all-test-volume-9f0fb22a-9edf-4755-83f5-d12351b576db
    STEP: Creating secret with name secret-projected-all-test-volume-98661217-f15a-4530-ae0f-ee4b38d3124b
    STEP: Creating a pod to test Check all projections for projected volume plugin
    Sep 19 21:18:10.872: INFO: Waiting up to 5m0s for pod "projected-volume-2dd45ce0-bce7-4b50-af47-fa50df901fe2" in namespace "projected-3911" to be "Succeeded or Failed"
    Sep 19 21:18:10.884: INFO: Pod "projected-volume-2dd45ce0-bce7-4b50-af47-fa50df901fe2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.385582ms
    Sep 19 21:18:12.888: INFO: Pod "projected-volume-2dd45ce0-bce7-4b50-af47-fa50df901fe2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015949218s
    STEP: Saw pod success
    Sep 19 21:18:12.888: INFO: Pod "projected-volume-2dd45ce0-bce7-4b50-af47-fa50df901fe2" satisfied condition "Succeeded or Failed"
    Sep 19 21:18:12.892: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod projected-volume-2dd45ce0-bce7-4b50-af47-fa50df901fe2 container projected-all-volume-test: <nil>
    STEP: delete the pod
    Sep 19 21:18:12.913: INFO: Waiting for pod projected-volume-2dd45ce0-bce7-4b50-af47-fa50df901fe2 to disappear
    Sep 19 21:18:12.915: INFO: Pod projected-volume-2dd45ce0-bce7-4b50-af47-fa50df901fe2 no longer exists
    [AfterEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:18:12.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3911" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":84,"skipped":1342,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 61 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:18:13.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9562" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":74,"skipped":1257,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 21:18:11.258: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7be75b2-790f-48cd-941e-7e2532b862b0" in namespace "downward-api-9276" to be "Succeeded or Failed"
    Sep 19 21:18:11.262: INFO: Pod "downwardapi-volume-b7be75b2-790f-48cd-941e-7e2532b862b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00761ms
    Sep 19 21:18:13.267: INFO: Pod "downwardapi-volume-b7be75b2-790f-48cd-941e-7e2532b862b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009170617s
    STEP: Saw pod success
    Sep 19 21:18:13.267: INFO: Pod "downwardapi-volume-b7be75b2-790f-48cd-941e-7e2532b862b0" satisfied condition "Succeeded or Failed"
    Sep 19 21:18:13.270: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod downwardapi-volume-b7be75b2-790f-48cd-941e-7e2532b862b0 container client-container: <nil>
    STEP: delete the pod
    Sep 19 21:18:13.292: INFO: Waiting for pod downwardapi-volume-b7be75b2-790f-48cd-941e-7e2532b862b0 to disappear
    Sep 19 21:18:13.295: INFO: Pod downwardapi-volume-b7be75b2-790f-48cd-941e-7e2532b862b0 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:18:13.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-9276" for this suite.
    
    •S
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":459,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:18:15.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-8773" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":85,"skipped":1367,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-8355-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":86,"skipped":1379,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:18:24.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-3887" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":23,"skipped":464,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:18:24.532: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-8095dbd3-5cd1-41f2-8e5e-d276fec73850
    STEP: Creating a pod to test consume configMaps
    Sep 19 21:18:24.573: INFO: Waiting up to 5m0s for pod "pod-configmaps-d335561e-307a-438d-b319-295563e22e40" in namespace "configmap-8087" to be "Succeeded or Failed"

    Sep 19 21:18:24.576: INFO: Pod "pod-configmaps-d335561e-307a-438d-b319-295563e22e40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.833164ms
    Sep 19 21:18:26.581: INFO: Pod "pod-configmaps-d335561e-307a-438d-b319-295563e22e40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007339701s
    STEP: Saw pod success
    Sep 19 21:18:26.581: INFO: Pod "pod-configmaps-d335561e-307a-438d-b319-295563e22e40" satisfied condition "Succeeded or Failed"

    Sep 19 21:18:26.584: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-configmaps-d335561e-307a-438d-b319-295563e22e40 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 21:18:26.603: INFO: Waiting for pod pod-configmaps-d335561e-307a-438d-b319-295563e22e40 to disappear
    Sep 19 21:18:26.606: INFO: Pod pod-configmaps-d335561e-307a-438d-b319-295563e22e40 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:18:26.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-8087" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":465,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:18:34.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-8684" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":87,"skipped":1388,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:18:39.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5491" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":88,"skipped":1503,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] version v1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 39 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:18:41.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "proxy-7157" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":89,"skipped":1518,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:19:01.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-1629" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":75,"skipped":1274,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:19:01.385: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep 19 21:19:01.430: INFO: Waiting up to 5m0s for pod "pod-6e2615cf-1978-4705-8979-8980cc2d87e8" in namespace "emptydir-7448" to be "Succeeded or Failed"

    Sep 19 21:19:01.433: INFO: Pod "pod-6e2615cf-1978-4705-8979-8980cc2d87e8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.295721ms
    Sep 19 21:19:03.438: INFO: Pod "pod-6e2615cf-1978-4705-8979-8980cc2d87e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007848557s
    STEP: Saw pod success
    Sep 19 21:19:03.438: INFO: Pod "pod-6e2615cf-1978-4705-8979-8980cc2d87e8" satisfied condition "Succeeded or Failed"

    Sep 19 21:19:03.441: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-6e2615cf-1978-4705-8979-8980cc2d87e8 container test-container: <nil>
    STEP: delete the pod
    Sep 19 21:19:03.458: INFO: Waiting for pod pod-6e2615cf-1978-4705-8979-8980cc2d87e8 to disappear
    Sep 19 21:19:03.462: INFO: Pod pod-6e2615cf-1978-4705-8979-8980cc2d87e8 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:19:03.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7448" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":76,"skipped":1283,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:19:03.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-4688" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":77,"skipped":1304,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:19:21.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-4578" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":78,"skipped":1350,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:19:21.773: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's args [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's args
    Sep 19 21:19:21.815: INFO: Waiting up to 5m0s for pod "var-expansion-d8ccd392-847c-4cb1-8ddd-1b73bf3c4de9" in namespace "var-expansion-2320" to be "Succeeded or Failed"

    Sep 19 21:19:21.819: INFO: Pod "var-expansion-d8ccd392-847c-4cb1-8ddd-1b73bf3c4de9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.981599ms
    Sep 19 21:19:23.824: INFO: Pod "var-expansion-d8ccd392-847c-4cb1-8ddd-1b73bf3c4de9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008695984s
    STEP: Saw pod success
    Sep 19 21:19:23.824: INFO: Pod "var-expansion-d8ccd392-847c-4cb1-8ddd-1b73bf3c4de9" satisfied condition "Succeeded or Failed"

    Sep 19 21:19:23.827: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod var-expansion-d8ccd392-847c-4cb1-8ddd-1b73bf3c4de9 container dapi-container: <nil>
    STEP: delete the pod
    Sep 19 21:19:23.843: INFO: Waiting for pod var-expansion-d8ccd392-847c-4cb1-8ddd-1b73bf3c4de9 to disappear
    Sep 19 21:19:23.846: INFO: Pod var-expansion-d8ccd392-847c-4cb1-8ddd-1b73bf3c4de9 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:19:23.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-2320" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":79,"skipped":1380,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:19:23.855: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename custom-resource-definition
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 33 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:19:30.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-7243" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":90,"skipped":1535,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:19:42.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-8636" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":91,"skipped":1536,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:19:45.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7821" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":468,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":80,"skipped":1380,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:19:24.436: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-probe
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:19:46.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-4776" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":81,"skipped":1380,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:19:49.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-3924" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":82,"skipped":1385,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:19:49.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-7763" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":83,"skipped":1386,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 21:19:49.763: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2dd6aec6-5f66-41fe-a8f4-591831b7f836" in namespace "projected-2810" to be "Succeeded or Failed"

    Sep 19 21:19:49.767: INFO: Pod "downwardapi-volume-2dd6aec6-5f66-41fe-a8f4-591831b7f836": Phase="Pending", Reason="", readiness=false. Elapsed: 3.666625ms
    Sep 19 21:19:51.772: INFO: Pod "downwardapi-volume-2dd6aec6-5f66-41fe-a8f4-591831b7f836": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008616692s
    STEP: Saw pod success
    Sep 19 21:19:51.772: INFO: Pod "downwardapi-volume-2dd6aec6-5f66-41fe-a8f4-591831b7f836" satisfied condition "Succeeded or Failed"

    Sep 19 21:19:51.776: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod downwardapi-volume-2dd6aec6-5f66-41fe-a8f4-591831b7f836 container client-container: <nil>
    STEP: delete the pod
    Sep 19 21:19:51.810: INFO: Waiting for pod downwardapi-volume-2dd6aec6-5f66-41fe-a8f4-591831b7f836 to disappear
    Sep 19 21:19:51.814: INFO: Pod downwardapi-volume-2dd6aec6-5f66-41fe-a8f4-591831b7f836 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:19:51.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2810" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":84,"skipped":1406,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 21:19:52.073: INFO: Waiting up to 5m0s for pod "downwardapi-volume-564765e0-eaec-4cb4-a756-9a77a7bd337a" in namespace "projected-596" to be "Succeeded or Failed"

    Sep 19 21:19:52.076: INFO: Pod "downwardapi-volume-564765e0-eaec-4cb4-a756-9a77a7bd337a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.452619ms
    Sep 19 21:19:54.081: INFO: Pod "downwardapi-volume-564765e0-eaec-4cb4-a756-9a77a7bd337a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008509248s
    STEP: Saw pod success
    Sep 19 21:19:54.081: INFO: Pod "downwardapi-volume-564765e0-eaec-4cb4-a756-9a77a7bd337a" satisfied condition "Succeeded or Failed"

    Sep 19 21:19:54.084: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-rzzjq pod downwardapi-volume-564765e0-eaec-4cb4-a756-9a77a7bd337a container client-container: <nil>
    STEP: delete the pod
    Sep 19 21:19:54.098: INFO: Waiting for pod downwardapi-volume-564765e0-eaec-4cb4-a756-9a77a7bd337a to disappear
    Sep 19 21:19:54.101: INFO: Pod downwardapi-volume-564765e0-eaec-4cb4-a756-9a77a7bd337a no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:19:56.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-594" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":92,"skipped":1545,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":85,"skipped":1507,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:19:54.114: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
    STEP: Destroying namespace "webhook-2494-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":86,"skipped":1507,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:19:59.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-5777" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":93,"skipped":1556,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 21:19:59.804: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2bf5f4bc-6c37-4558-8779-91f436535017" in namespace "downward-api-8905" to be "Succeeded or Failed"

    Sep 19 21:19:59.827: INFO: Pod "downwardapi-volume-2bf5f4bc-6c37-4558-8779-91f436535017": Phase="Pending", Reason="", readiness=false. Elapsed: 22.304915ms
    Sep 19 21:20:01.833: INFO: Pod "downwardapi-volume-2bf5f4bc-6c37-4558-8779-91f436535017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02826043s
    STEP: Saw pod success
    Sep 19 21:20:01.833: INFO: Pod "downwardapi-volume-2bf5f4bc-6c37-4558-8779-91f436535017" satisfied condition "Succeeded or Failed"

    Sep 19 21:20:01.841: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod downwardapi-volume-2bf5f4bc-6c37-4558-8779-91f436535017 container client-container: <nil>
    STEP: delete the pod
    Sep 19 21:20:01.861: INFO: Waiting for pod downwardapi-volume-2bf5f4bc-6c37-4558-8779-91f436535017 to disappear
    Sep 19 21:20:01.865: INFO: Pod downwardapi-volume-2bf5f4bc-6c37-4558-8779-91f436535017 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:01.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-8905" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":94,"skipped":1590,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:09.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-6241" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":95,"skipped":1596,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] server version
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:09.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "server-version-3205" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":96,"skipped":1600,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    Sep 19 21:19:53.198: INFO: Unable to read jessie_udp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:53.201: INFO: Unable to read jessie_tcp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:53.205: INFO: Unable to read jessie_udp@dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:53.210: INFO: Unable to read jessie_tcp@dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:53.213: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:53.216: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:53.240: INFO: Lookups using dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6962 wheezy_tcp@dns-test-service.dns-6962 wheezy_udp@dns-test-service.dns-6962.svc wheezy_tcp@dns-test-service.dns-6962.svc wheezy_udp@_http._tcp.dns-test-service.dns-6962.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6962.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6962 jessie_tcp@dns-test-service.dns-6962 jessie_udp@dns-test-service.dns-6962.svc jessie_tcp@dns-test-service.dns-6962.svc jessie_udp@_http._tcp.dns-test-service.dns-6962.svc jessie_tcp@_http._tcp.dns-test-service.dns-6962.svc]

    
    Sep 19 21:19:58.246: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:58.250: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:58.255: INFO: Unable to read wheezy_udp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:58.259: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:58.262: INFO: Unable to read wheezy_udp@dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
... skipping 5 lines ...
    Sep 19 21:19:58.306: INFO: Unable to read jessie_udp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:58.309: INFO: Unable to read jessie_tcp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:58.312: INFO: Unable to read jessie_udp@dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:58.315: INFO: Unable to read jessie_tcp@dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:58.318: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:58.322: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:19:58.342: INFO: Lookups using dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6962 wheezy_tcp@dns-test-service.dns-6962 wheezy_udp@dns-test-service.dns-6962.svc wheezy_tcp@dns-test-service.dns-6962.svc wheezy_udp@_http._tcp.dns-test-service.dns-6962.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6962.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6962 jessie_tcp@dns-test-service.dns-6962 jessie_udp@dns-test-service.dns-6962.svc jessie_tcp@dns-test-service.dns-6962.svc jessie_udp@_http._tcp.dns-test-service.dns-6962.svc jessie_tcp@_http._tcp.dns-test-service.dns-6962.svc]

    
    Sep 19 21:20:03.246: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:03.250: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:03.254: INFO: Unable to read wheezy_udp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:03.257: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:03.261: INFO: Unable to read wheezy_udp@dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
... skipping 5 lines ...
    Sep 19 21:20:03.330: INFO: Unable to read jessie_udp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:03.335: INFO: Unable to read jessie_tcp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:03.339: INFO: Unable to read jessie_udp@dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:03.345: INFO: Unable to read jessie_tcp@dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:03.350: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:03.354: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:03.378: INFO: Lookups using dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6962 wheezy_tcp@dns-test-service.dns-6962 wheezy_udp@dns-test-service.dns-6962.svc wheezy_tcp@dns-test-service.dns-6962.svc wheezy_udp@_http._tcp.dns-test-service.dns-6962.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6962.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6962 jessie_tcp@dns-test-service.dns-6962 jessie_udp@dns-test-service.dns-6962.svc jessie_tcp@dns-test-service.dns-6962.svc jessie_udp@_http._tcp.dns-test-service.dns-6962.svc jessie_tcp@_http._tcp.dns-test-service.dns-6962.svc]

    
    Sep 19 21:20:08.246: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:08.252: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:08.257: INFO: Unable to read wheezy_udp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:08.262: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:08.266: INFO: Unable to read wheezy_udp@dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
... skipping 5 lines ...
    Sep 19 21:20:08.326: INFO: Unable to read jessie_udp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:08.331: INFO: Unable to read jessie_tcp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:08.336: INFO: Unable to read jessie_udp@dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:08.339: INFO: Unable to read jessie_tcp@dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:08.344: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:08.349: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:08.378: INFO: Lookups using dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6962 wheezy_tcp@dns-test-service.dns-6962 wheezy_udp@dns-test-service.dns-6962.svc wheezy_tcp@dns-test-service.dns-6962.svc wheezy_udp@_http._tcp.dns-test-service.dns-6962.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6962.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6962 jessie_tcp@dns-test-service.dns-6962 jessie_udp@dns-test-service.dns-6962.svc jessie_tcp@dns-test-service.dns-6962.svc jessie_udp@_http._tcp.dns-test-service.dns-6962.svc jessie_tcp@_http._tcp.dns-test-service.dns-6962.svc]

    
    Sep 19 21:20:13.248: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:13.252: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:13.257: INFO: Unable to read wheezy_udp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:13.262: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:13.266: INFO: Unable to read wheezy_udp@dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
... skipping 5 lines ...
    Sep 19 21:20:13.338: INFO: Unable to read jessie_udp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:13.345: INFO: Unable to read jessie_tcp@dns-test-service.dns-6962 from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:13.350: INFO: Unable to read jessie_udp@dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:13.355: INFO: Unable to read jessie_tcp@dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:13.359: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:13.373: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6962.svc from pod dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15: the server could not find the requested resource (get pods dns-test-0d14411e-d38b-479b-b115-4703a93efe15)
    Sep 19 21:20:13.400: INFO: Lookups using dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6962 wheezy_tcp@dns-test-service.dns-6962 wheezy_udp@dns-test-service.dns-6962.svc wheezy_tcp@dns-test-service.dns-6962.svc wheezy_udp@_http._tcp.dns-test-service.dns-6962.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6962.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6962 jessie_tcp@dns-test-service.dns-6962 jessie_udp@dns-test-service.dns-6962.svc jessie_tcp@dns-test-service.dns-6962.svc jessie_udp@_http._tcp.dns-test-service.dns-6962.svc jessie_tcp@_http._tcp.dns-test-service.dns-6962.svc]
    
    Sep 19 21:20:18.426: INFO: DNS probes using dns-6962/dns-test-0d14411e-d38b-479b-b115-4703a93efe15 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:18.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-6962" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":26,"skipped":485,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:19.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-6233" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":87,"skipped":1532,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:25.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-9348" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":27,"skipped":537,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:25.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-6924" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":88,"skipped":1536,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:29.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-8540" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":28,"skipped":544,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:29.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-5539" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":89,"skipped":1556,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:30.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-9901" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":97,"skipped":1612,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:20:29.127: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-f3757de1-1954-4769-a1d0-f99741b6ba93
    STEP: Creating a pod to test consume secrets
    Sep 19 21:20:29.169: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9ca5291d-d32c-4372-8d09-8e232510774f" in namespace "projected-4640" to be "Succeeded or Failed"
    Sep 19 21:20:29.173: INFO: Pod "pod-projected-secrets-9ca5291d-d32c-4372-8d09-8e232510774f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.227216ms
    Sep 19 21:20:31.177: INFO: Pod "pod-projected-secrets-9ca5291d-d32c-4372-8d09-8e232510774f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008041408s
    STEP: Saw pod success
    Sep 19 21:20:31.177: INFO: Pod "pod-projected-secrets-9ca5291d-d32c-4372-8d09-8e232510774f" satisfied condition "Succeeded or Failed"
    Sep 19 21:20:31.183: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-projected-secrets-9ca5291d-d32c-4372-8d09-8e232510774f container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 19 21:20:31.207: INFO: Waiting for pod pod-projected-secrets-9ca5291d-d32c-4372-8d09-8e232510774f to disappear
    Sep 19 21:20:31.211: INFO: Pod pod-projected-secrets-9ca5291d-d32c-4372-8d09-8e232510774f no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 9 lines ...
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-4009/configmap-test-7c5e693e-feb8-487a-a3fe-43dfcd28ef27
    STEP: Creating a pod to test consume configMaps
    Sep 19 21:20:29.212: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d2e16ca-f51e-42b1-bf34-6b44d91d9be5" in namespace "configmap-4009" to be "Succeeded or Failed"
    Sep 19 21:20:29.216: INFO: Pod "pod-configmaps-2d2e16ca-f51e-42b1-bf34-6b44d91d9be5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.14927ms
    Sep 19 21:20:31.219: INFO: Pod "pod-configmaps-2d2e16ca-f51e-42b1-bf34-6b44d91d9be5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006480181s
    STEP: Saw pod success
    Sep 19 21:20:31.219: INFO: Pod "pod-configmaps-2d2e16ca-f51e-42b1-bf34-6b44d91d9be5" satisfied condition "Succeeded or Failed"
    Sep 19 21:20:31.226: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-configmaps-2d2e16ca-f51e-42b1-bf34-6b44d91d9be5 container env-test: <nil>
    STEP: delete the pod
    Sep 19 21:20:31.249: INFO: Waiting for pod pod-configmaps-2d2e16ca-f51e-42b1-bf34-6b44d91d9be5 to disappear
    Sep 19 21:20:31.255: INFO: Pod pod-configmaps-2d2e16ca-f51e-42b1-bf34-6b44d91d9be5 no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:31.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4009" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":90,"skipped":1568,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":588,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:20:31.224: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pods
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:31.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-8635" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":30,"skipped":588,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:31.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-5565" for this suite.
    
    •S
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":91,"skipped":1576,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:38.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3374" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":98,"skipped":1614,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:20:38.748: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 19 21:20:38.782: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-5cbdbf73-bb89-47f6-b386-37ac7c61a4a8" in namespace "security-context-test-7227" to be "Succeeded or Failed"
    Sep 19 21:20:38.786: INFO: Pod "busybox-readonly-false-5cbdbf73-bb89-47f6-b386-37ac7c61a4a8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.234591ms
    Sep 19 21:20:40.792: INFO: Pod "busybox-readonly-false-5cbdbf73-bb89-47f6-b386-37ac7c61a4a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009089841s
    Sep 19 21:20:40.792: INFO: Pod "busybox-readonly-false-5cbdbf73-bb89-47f6-b386-37ac7c61a4a8" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:40.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-7227" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":99,"skipped":1619,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:41.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-9328" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":653,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:20:41.436: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-3303c53e-d52e-4c1c-b379-b41ac016ed20
    STEP: Creating a pod to test consume configMaps
    Sep 19 21:20:41.503: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-313a6b38-ed55-421b-9989-f2045d1773c3" in namespace "projected-8409" to be "Succeeded or Failed"
    Sep 19 21:20:41.514: INFO: Pod "pod-projected-configmaps-313a6b38-ed55-421b-9989-f2045d1773c3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.463757ms
    Sep 19 21:20:43.519: INFO: Pod "pod-projected-configmaps-313a6b38-ed55-421b-9989-f2045d1773c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015422943s
    STEP: Saw pod success
    Sep 19 21:20:43.519: INFO: Pod "pod-projected-configmaps-313a6b38-ed55-421b-9989-f2045d1773c3" satisfied condition "Succeeded or Failed"
    Sep 19 21:20:43.523: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-rzzjq pod pod-projected-configmaps-313a6b38-ed55-421b-9989-f2045d1773c3 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 21:20:43.547: INFO: Waiting for pod pod-projected-configmaps-313a6b38-ed55-421b-9989-f2045d1773c3 to disappear
    Sep 19 21:20:43.552: INFO: Pod pod-projected-configmaps-313a6b38-ed55-421b-9989-f2045d1773c3 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:43.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8409" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":684,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:45.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-1678" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":92,"skipped":1584,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:20:45.572: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-a9e140c2-4ebe-4227-81f8-ce1b2123633e
    STEP: Creating a pod to test consume configMaps
    Sep 19 21:20:45.625: INFO: Waiting up to 5m0s for pod "pod-configmaps-379e8e68-e7fc-4d02-8d09-5f21bf888986" in namespace "configmap-4208" to be "Succeeded or Failed"
    Sep 19 21:20:45.631: INFO: Pod "pod-configmaps-379e8e68-e7fc-4d02-8d09-5f21bf888986": Phase="Pending", Reason="", readiness=false. Elapsed: 5.287583ms
    Sep 19 21:20:47.636: INFO: Pod "pod-configmaps-379e8e68-e7fc-4d02-8d09-5f21bf888986": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009919242s
    STEP: Saw pod success
    Sep 19 21:20:47.636: INFO: Pod "pod-configmaps-379e8e68-e7fc-4d02-8d09-5f21bf888986" satisfied condition "Succeeded or Failed"
    Sep 19 21:20:47.639: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-rzzjq pod pod-configmaps-379e8e68-e7fc-4d02-8d09-5f21bf888986 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 21:20:47.656: INFO: Waiting for pod pod-configmaps-379e8e68-e7fc-4d02-8d09-5f21bf888986 to disappear
    Sep 19 21:20:47.659: INFO: Pod pod-configmaps-379e8e68-e7fc-4d02-8d09-5f21bf888986 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:47.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4208" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":93,"skipped":1602,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-scheduling] LimitRange
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:47.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "limitrange-2750" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":100,"skipped":1638,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:20:48.003: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep 19 21:20:48.038: INFO: Waiting up to 5m0s for pod "pod-ec87d270-efc8-4f1c-8ffe-a2980db3f8db" in namespace "emptydir-550" to be "Succeeded or Failed"
    Sep 19 21:20:48.042: INFO: Pod "pod-ec87d270-efc8-4f1c-8ffe-a2980db3f8db": Phase="Pending", Reason="", readiness=false. Elapsed: 3.479447ms
    Sep 19 21:20:50.047: INFO: Pod "pod-ec87d270-efc8-4f1c-8ffe-a2980db3f8db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008411209s
    STEP: Saw pod success
    Sep 19 21:20:50.047: INFO: Pod "pod-ec87d270-efc8-4f1c-8ffe-a2980db3f8db" satisfied condition "Succeeded or Failed"
    Sep 19 21:20:50.050: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-rzzjq pod pod-ec87d270-efc8-4f1c-8ffe-a2980db3f8db container test-container: <nil>
    STEP: delete the pod
    Sep 19 21:20:50.072: INFO: Waiting for pod pod-ec87d270-efc8-4f1c-8ffe-a2980db3f8db to disappear
    Sep 19 21:20:50.077: INFO: Pod pod-ec87d270-efc8-4f1c-8ffe-a2980db3f8db no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:50.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-550" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":101,"skipped":1644,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:20:50.097: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep 19 21:20:50.156: INFO: Waiting up to 5m0s for pod "pod-14e18e69-9626-41f1-a15a-f827e4257e36" in namespace "emptydir-638" to be "Succeeded or Failed"
    Sep 19 21:20:50.160: INFO: Pod "pod-14e18e69-9626-41f1-a15a-f827e4257e36": Phase="Pending", Reason="", readiness=false. Elapsed: 3.160818ms
    Sep 19 21:20:52.165: INFO: Pod "pod-14e18e69-9626-41f1-a15a-f827e4257e36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008038675s
    STEP: Saw pod success
    Sep 19 21:20:52.165: INFO: Pod "pod-14e18e69-9626-41f1-a15a-f827e4257e36" satisfied condition "Succeeded or Failed"
    Sep 19 21:20:52.170: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-rzzjq pod pod-14e18e69-9626-41f1-a15a-f827e4257e36 container test-container: <nil>
    STEP: delete the pod
    Sep 19 21:20:52.190: INFO: Waiting for pod pod-14e18e69-9626-41f1-a15a-f827e4257e36 to disappear
    Sep 19 21:20:52.193: INFO: Pod pod-14e18e69-9626-41f1-a15a-f827e4257e36 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:20:52.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-638" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":102,"skipped":1648,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:21:13.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-7534" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":103,"skipped":1697,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:21:13.615: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-d7f4323d-f3bb-4d50-b98b-e477e36e5b83
    STEP: Creating a pod to test consume configMaps
    Sep 19 21:21:13.663: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2ccaf6a0-9818-44ad-be0b-bb023ffd9b17" in namespace "projected-5333" to be "Succeeded or Failed"
    Sep 19 21:21:13.669: INFO: Pod "pod-projected-configmaps-2ccaf6a0-9818-44ad-be0b-bb023ffd9b17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.68793ms
    Sep 19 21:21:15.673: INFO: Pod "pod-projected-configmaps-2ccaf6a0-9818-44ad-be0b-bb023ffd9b17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009073632s
    STEP: Saw pod success
    Sep 19 21:21:15.673: INFO: Pod "pod-projected-configmaps-2ccaf6a0-9818-44ad-be0b-bb023ffd9b17" satisfied condition "Succeeded or Failed"
    Sep 19 21:21:15.677: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-projected-configmaps-2ccaf6a0-9818-44ad-be0b-bb023ffd9b17 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 21:21:15.694: INFO: Waiting for pod pod-projected-configmaps-2ccaf6a0-9818-44ad-be0b-bb023ffd9b17 to disappear
    Sep 19 21:21:15.697: INFO: Pod pod-projected-configmaps-2ccaf6a0-9818-44ad-be0b-bb023ffd9b17 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:21:15.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5333" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":104,"skipped":1743,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:21:15.720: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override arguments
    Sep 19 21:21:15.768: INFO: Waiting up to 5m0s for pod "client-containers-5490f6cc-90f4-4860-b2ac-56023526fcc6" in namespace "containers-2958" to be "Succeeded or Failed"
    Sep 19 21:21:15.771: INFO: Pod "client-containers-5490f6cc-90f4-4860-b2ac-56023526fcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.913266ms
    Sep 19 21:21:17.775: INFO: Pod "client-containers-5490f6cc-90f4-4860-b2ac-56023526fcc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006975366s
    STEP: Saw pod success
    Sep 19 21:21:17.775: INFO: Pod "client-containers-5490f6cc-90f4-4860-b2ac-56023526fcc6" satisfied condition "Succeeded or Failed"
    Sep 19 21:21:17.777: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod client-containers-5490f6cc-90f4-4860-b2ac-56023526fcc6 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 21:21:17.790: INFO: Waiting for pod client-containers-5490f6cc-90f4-4860-b2ac-56023526fcc6 to disappear
    Sep 19 21:21:17.793: INFO: Pod client-containers-5490f6cc-90f4-4860-b2ac-56023526fcc6 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:21:17.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-2958" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":105,"skipped":1751,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:21:19.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-2796" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":106,"skipped":1788,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:21:19.934: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-dccdfe3b-7585-4b8d-85ed-c6b7d4578807
    STEP: Creating a pod to test consume secrets
    Sep 19 21:21:19.987: INFO: Waiting up to 5m0s for pod "pod-secrets-07ff4c0f-e914-48cb-982e-e04f2243b6a0" in namespace "secrets-1386" to be "Succeeded or Failed"
    Sep 19 21:21:19.993: INFO: Pod "pod-secrets-07ff4c0f-e914-48cb-982e-e04f2243b6a0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.282418ms
    Sep 19 21:21:21.999: INFO: Pod "pod-secrets-07ff4c0f-e914-48cb-982e-e04f2243b6a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011113209s
    STEP: Saw pod success
    Sep 19 21:21:21.999: INFO: Pod "pod-secrets-07ff4c0f-e914-48cb-982e-e04f2243b6a0" satisfied condition "Succeeded or Failed"
    Sep 19 21:21:22.002: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod pod-secrets-07ff4c0f-e914-48cb-982e-e04f2243b6a0 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 19 21:21:22.026: INFO: Waiting for pod pod-secrets-07ff4c0f-e914-48cb-982e-e04f2243b6a0 to disappear
    Sep 19 21:21:22.030: INFO: Pod pod-secrets-07ff4c0f-e914-48cb-982e-e04f2243b6a0 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:21:22.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-1386" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":107,"skipped":1794,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 70 lines ...
    STEP: Destroying namespace "services-1072" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":108,"skipped":1808,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:21:49.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-924" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":109,"skipped":1835,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:21:55.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-2218" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":110,"skipped":1836,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods Extended
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:21:55.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-3087" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":111,"skipped":1840,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:21:55.837: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should mount projected service account token [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test service account token: 
    Sep 19 21:21:55.870: INFO: Waiting up to 5m0s for pod "test-pod-c440918c-8986-49db-9645-523f08169d12" in namespace "svcaccounts-899" to be "Succeeded or Failed"
    Sep 19 21:21:55.873: INFO: Pod "test-pod-c440918c-8986-49db-9645-523f08169d12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.799534ms
    Sep 19 21:21:57.877: INFO: Pod "test-pod-c440918c-8986-49db-9645-523f08169d12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006971878s
    STEP: Saw pod success
    Sep 19 21:21:57.878: INFO: Pod "test-pod-c440918c-8986-49db-9645-523f08169d12" satisfied condition "Succeeded or Failed"
    Sep 19 21:21:57.880: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod test-pod-c440918c-8986-49db-9645-523f08169d12 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 21:21:57.899: INFO: Waiting for pod test-pod-c440918c-8986-49db-9645-523f08169d12 to disappear
    Sep 19 21:21:57.902: INFO: Pod test-pod-c440918c-8986-49db-9645-523f08169d12 no longer exists
    [AfterEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:21:57.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-899" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":112,"skipped":1844,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:21:57.976: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-e7c1979e-f078-4e2c-b46c-dbb78b9f469a
    STEP: Creating a pod to test consume secrets
    Sep 19 21:21:58.019: INFO: Waiting up to 5m0s for pod "pod-secrets-a3343d13-32d0-43e3-9314-aaab3b0cba51" in namespace "secrets-2955" to be "Succeeded or Failed"
    Sep 19 21:21:58.023: INFO: Pod "pod-secrets-a3343d13-32d0-43e3-9314-aaab3b0cba51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205672ms
    Sep 19 21:22:00.030: INFO: Pod "pod-secrets-a3343d13-32d0-43e3-9314-aaab3b0cba51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01082912s
    STEP: Saw pod success
    Sep 19 21:22:00.030: INFO: Pod "pod-secrets-a3343d13-32d0-43e3-9314-aaab3b0cba51" satisfied condition "Succeeded or Failed"
    Sep 19 21:22:00.034: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-secrets-a3343d13-32d0-43e3-9314-aaab3b0cba51 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 19 21:22:00.057: INFO: Waiting for pod pod-secrets-a3343d13-32d0-43e3-9314-aaab3b0cba51 to disappear
    Sep 19 21:22:00.061: INFO: Pod pod-secrets-a3343d13-32d0-43e3-9314-aaab3b0cba51 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:22:00.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-2955" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":113,"skipped":1875,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:22:02.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-4095" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":114,"skipped":1901,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:22:02.310: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-d817c12d-0896-4e0d-a0f3-c1225f68219c
    STEP: Creating a pod to test consume secrets
    Sep 19 21:22:02.354: INFO: Waiting up to 5m0s for pod "pod-secrets-c5a1a488-1f38-49d1-aaa7-036c1ead982d" in namespace "secrets-999" to be "Succeeded or Failed"
    Sep 19 21:22:02.358: INFO: Pod "pod-secrets-c5a1a488-1f38-49d1-aaa7-036c1ead982d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.286724ms
    Sep 19 21:22:04.362: INFO: Pod "pod-secrets-c5a1a488-1f38-49d1-aaa7-036c1ead982d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00755542s
    STEP: Saw pod success
    Sep 19 21:22:04.362: INFO: Pod "pod-secrets-c5a1a488-1f38-49d1-aaa7-036c1ead982d" satisfied condition "Succeeded or Failed"
    Sep 19 21:22:04.365: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-secrets-c5a1a488-1f38-49d1-aaa7-036c1ead982d container secret-env-test: <nil>
    STEP: delete the pod
    Sep 19 21:22:04.385: INFO: Waiting for pod pod-secrets-c5a1a488-1f38-49d1-aaa7-036c1ead982d to disappear
    Sep 19 21:22:04.388: INFO: Pod pod-secrets-c5a1a488-1f38-49d1-aaa7-036c1ead982d no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:22:04.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-999" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":115,"skipped":1912,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:22:04.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-1083" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":116,"skipped":1925,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:22:04.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-1382" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":117,"skipped":1937,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:22:04.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-7696" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":118,"skipped":1940,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:22:04.670: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename deployment
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:22:09.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-2936" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":119,"skipped":1940,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:22:09.814: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-82082bd9-1075-4aa8-b14d-38ef111944b3
    STEP: Creating a pod to test consume secrets
    Sep 19 21:22:09.859: INFO: Waiting up to 5m0s for pod "pod-secrets-d7a9a019-0681-4046-9501-3c01f415845f" in namespace "secrets-2458" to be "Succeeded or Failed"
    Sep 19 21:22:09.864: INFO: Pod "pod-secrets-d7a9a019-0681-4046-9501-3c01f415845f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.44713ms
    Sep 19 21:22:11.868: INFO: Pod "pod-secrets-d7a9a019-0681-4046-9501-3c01f415845f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008902529s
    STEP: Saw pod success
    Sep 19 21:22:11.868: INFO: Pod "pod-secrets-d7a9a019-0681-4046-9501-3c01f415845f" satisfied condition "Succeeded or Failed"
    Sep 19 21:22:11.872: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod pod-secrets-d7a9a019-0681-4046-9501-3c01f415845f container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 19 21:22:11.887: INFO: Waiting for pod pod-secrets-d7a9a019-0681-4046-9501-3c01f415845f to disappear
    Sep 19 21:22:11.891: INFO: Pod pod-secrets-d7a9a019-0681-4046-9501-3c01f415845f no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:22:11.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-2458" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":120,"skipped":1950,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
    
    STEP: creating a pod to probe /etc/hosts
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep 19 21:21:37.101: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-3917.svc.cluster.local from pod dns-3917/dns-test-f4a86187-12ae-4e82-9f6b-92556a3182d4: the server is currently unable to handle the request (get pods dns-test-f4a86187-12ae-4e82-9f6b-92556a3182d4)
    Sep 19 21:23:02.856: FAIL: Unable to read wheezy_hosts@dns-querier-1 from pod dns-3917/dns-test-f4a86187-12ae-4e82-9f6b-92556a3182d4: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-3917/pods/dns-test-f4a86187-12ae-4e82-9f6b-92556a3182d4/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00377dd68, 0x29a3500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc004636d38, 0xc00377dd68, 0xc004636d38, 0xc00377dd68)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
    testing.tRunner(0xc001e60780, 0x70fea78)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0919 21:23:02.857470      20 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep 19 21:23:02.856: Unable to read wheezy_hosts@dns-querier-1 from pod dns-3917/dns-test-f4a86187-12ae-4e82-9f6b-92556a3182d4: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-3917/pods/dns-test-f4a86187-12ae-4e82-9f6b-92556a3182d4/proxy/results/wheezy_hosts@dns-querier-1\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00377dd68, 0x29a3500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc004636d38, 0xc00377dd68, 0xc004636d38, 0xc00377dd68)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc00377dd68, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc000a08400, 0x8, 0x8, 0x6ee63d3, 0x7, 0xc001a25400, 0x77b8c18, 0xc001bfc6e0, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc001082b00, 0xc001a25400, 0xc000a08400, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.4()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:127 +0x62a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc001e60780)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc001e60780)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc001e60780, 0x70fea78)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a84100, 0xc0011d6800)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6a84100, 0xc0011d6800)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0032e0500, 0x12f, 0x86a5e60, 0x7d, 0xd3, 0xc000ff8000, 0x7fc)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x61dbcc0, 0x75da840)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc0032e0500, 0x12f, 0xc00377d7a8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0032e0500, 0x12f, 0xc00377d890, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x6f89b47, 0x24, 0xc00377daf0, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc004636d00, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00377dd68, 0x29a3500, 0x0, 0x0)
... skipping 77 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:23:04.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-1439" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":121,"skipped":1989,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:23:04.157: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-19737834-5a28-45dc-ad9f-46e136f4202f
    STEP: Creating a pod to test consume secrets
    Sep 19 21:23:04.200: INFO: Waiting up to 5m0s for pod "pod-secrets-ee847932-4629-4998-b668-90246070b2c7" in namespace "secrets-8290" to be "Succeeded or Failed"
    Sep 19 21:23:04.204: INFO: Pod "pod-secrets-ee847932-4629-4998-b668-90246070b2c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239947ms
    Sep 19 21:23:06.213: INFO: Pod "pod-secrets-ee847932-4629-4998-b668-90246070b2c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012797125s
    STEP: Saw pod success
    Sep 19 21:23:06.213: INFO: Pod "pod-secrets-ee847932-4629-4998-b668-90246070b2c7" satisfied condition "Succeeded or Failed"
    Sep 19 21:23:06.217: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-secrets-ee847932-4629-4998-b668-90246070b2c7 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 19 21:23:06.237: INFO: Waiting for pod pod-secrets-ee847932-4629-4998-b668-90246070b2c7 to disappear
    Sep 19 21:23:06.241: INFO: Pod pod-secrets-ee847932-4629-4998-b668-90246070b2c7 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:23:06.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-8290" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":122,"skipped":1990,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 133 lines ...
    Sep 19 21:22:49.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-67 exec execpod-affinitygrnz4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 19 21:22:52.123: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n"
    Sep 19 21:22:52.123: INFO: stdout: ""
    Sep 19 21:22:52.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-67 exec execpod-affinitygrnz4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 19 21:22:54.311: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n"
    Sep 19 21:22:54.311: INFO: stdout: ""
    Sep 19 21:22:54.312: FAIL: Unexpected error:
        <*errors.errorString | 0xc0046540e0>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
    occurred
    
... skipping 27 lines ...
    • Failure [145.253 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 19 21:22:54.312: Unexpected error:
          <*errors.errorString | 0xc0046540e0>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
      occurred
    
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:23:28.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-1671" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":123,"skipped":2009,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":32,"skipped":702,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:23:08.850: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 42 lines ...
    STEP: Destroying namespace "services-3991" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":33,"skipped":702,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:23:31.505: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-e7f37ead-d6be-477c-bac1-99aaa894959c
    STEP: Creating a pod to test consume configMaps
    Sep 19 21:23:31.567: INFO: Waiting up to 5m0s for pod "pod-configmaps-4adc12fe-2ca8-48ad-a310-2f7a3cf107b5" in namespace "configmap-5748" to be "Succeeded or Failed"
    Sep 19 21:23:31.575: INFO: Pod "pod-configmaps-4adc12fe-2ca8-48ad-a310-2f7a3cf107b5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.298379ms
    Sep 19 21:23:33.578: INFO: Pod "pod-configmaps-4adc12fe-2ca8-48ad-a310-2f7a3cf107b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010636392s
    STEP: Saw pod success
    Sep 19 21:23:33.578: INFO: Pod "pod-configmaps-4adc12fe-2ca8-48ad-a310-2f7a3cf107b5" satisfied condition "Succeeded or Failed"
    Sep 19 21:23:33.581: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-configmaps-4adc12fe-2ca8-48ad-a310-2f7a3cf107b5 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 21:23:33.593: INFO: Waiting for pod pod-configmaps-4adc12fe-2ca8-48ad-a310-2f7a3cf107b5 to disappear
    Sep 19 21:23:33.596: INFO: Pod pod-configmaps-4adc12fe-2ca8-48ad-a310-2f7a3cf107b5 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:23:33.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5748" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":721,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:23:35.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-9537" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":743,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
    STEP: Destroying namespace "webhook-7697-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":36,"skipped":773,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:23:49.486: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:23:55.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-3820" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":773,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:23:28.482: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 19 21:23:28.523: INFO: created pod
    Sep 19 21:23:28.523: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-9844" to be "Succeeded or Failed"
    Sep 19 21:23:28.526: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.015786ms
    Sep 19 21:23:30.531: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008077657s
    STEP: Saw pod success
    Sep 19 21:23:30.531: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
    Sep 19 21:24:00.532: INFO: polling logs
    Sep 19 21:24:00.546: INFO: Pod logs: 
    2022/09/19 21:23:29 OK: Got token
    2022/09/19 21:23:29 validating with in-cluster discovery
    2022/09/19 21:23:29 OK: got issuer https://kubernetes.default.svc.cluster.local
    2022/09/19 21:23:29 Full, not-validated claims: 
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:24:00.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-9844" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":124,"skipped":2045,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:24:23.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-6994" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":38,"skipped":775,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 52 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:24:33.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9729" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":39,"skipped":781,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-wh2f
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 19 21:24:33.909: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wh2f" in namespace "subpath-210" to be "Succeeded or Failed"
    Sep 19 21:24:33.912: INFO: Pod "pod-subpath-test-configmap-wh2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.784902ms
    Sep 19 21:24:35.916: INFO: Pod "pod-subpath-test-configmap-wh2f": Phase="Running", Reason="", readiness=true. Elapsed: 2.007344958s
    Sep 19 21:24:37.921: INFO: Pod "pod-subpath-test-configmap-wh2f": Phase="Running", Reason="", readiness=true. Elapsed: 4.012029509s
    Sep 19 21:24:39.925: INFO: Pod "pod-subpath-test-configmap-wh2f": Phase="Running", Reason="", readiness=true. Elapsed: 6.016415312s
    Sep 19 21:24:41.931: INFO: Pod "pod-subpath-test-configmap-wh2f": Phase="Running", Reason="", readiness=true. Elapsed: 8.021588631s
    Sep 19 21:24:43.936: INFO: Pod "pod-subpath-test-configmap-wh2f": Phase="Running", Reason="", readiness=true. Elapsed: 10.026949657s
    Sep 19 21:24:45.940: INFO: Pod "pod-subpath-test-configmap-wh2f": Phase="Running", Reason="", readiness=true. Elapsed: 12.030765043s
    Sep 19 21:24:47.944: INFO: Pod "pod-subpath-test-configmap-wh2f": Phase="Running", Reason="", readiness=true. Elapsed: 14.035305804s
    Sep 19 21:24:49.948: INFO: Pod "pod-subpath-test-configmap-wh2f": Phase="Running", Reason="", readiness=true. Elapsed: 16.039413813s
    Sep 19 21:24:51.954: INFO: Pod "pod-subpath-test-configmap-wh2f": Phase="Running", Reason="", readiness=true. Elapsed: 18.044948909s
    Sep 19 21:24:53.958: INFO: Pod "pod-subpath-test-configmap-wh2f": Phase="Running", Reason="", readiness=true. Elapsed: 20.049220481s
    Sep 19 21:24:55.963: INFO: Pod "pod-subpath-test-configmap-wh2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.054018904s
    STEP: Saw pod success
    Sep 19 21:24:55.963: INFO: Pod "pod-subpath-test-configmap-wh2f" satisfied condition "Succeeded or Failed"
    Sep 19 21:24:55.966: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-rzzjq pod pod-subpath-test-configmap-wh2f container test-container-subpath-configmap-wh2f: <nil>
    STEP: delete the pod
    Sep 19 21:24:55.985: INFO: Waiting for pod pod-subpath-test-configmap-wh2f to disappear
    Sep 19 21:24:55.988: INFO: Pod pod-subpath-test-configmap-wh2f no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-wh2f
    Sep 19 21:24:55.988: INFO: Deleting pod "pod-subpath-test-configmap-wh2f" in namespace "subpath-210"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:24:55.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-210" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":40,"skipped":790,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "webhook-4438-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":41,"skipped":830,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:25:01.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-9676" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":42,"skipped":842,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    Sep 19 21:25:02.875: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
    Sep 19 21:25:02.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2084 describe pod agnhost-primary-w7w5w'
    Sep 19 21:25:02.998: INFO: stderr: ""
    Sep 19 21:25:02.998: INFO: stdout: "Name:         agnhost-primary-w7w5w\nNamespace:    kubectl-2084\nPriority:     0\nNode:         k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc/172.18.0.4\nStart Time:   Mon, 19 Sep 2022 21:25:01 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           192.168.0.180\nIPs:\n  IP:           192.168.0.180\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://d3bde5351ba52d582e25600eef452c90041e9e1da763e8f0140f07ce49cefa44\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 19 Sep 2022 21:25:02 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvk45 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-kvk45:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  1s    default-scheduler  Successfully assigned kubectl-2084/agnhost-primary-w7w5w to k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc\n  Normal  Pulled     0s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    0s    kubelet            Created container agnhost-primary\n  Normal  Started    0s    kubelet            Started container agnhost-primary\n"
    Sep 19 21:25:02.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2084 describe rc agnhost-primary'
    Sep 19 21:25:03.120: INFO: stderr: ""
    Sep 19 21:25:03.120: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-2084\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  2s    replication-controller  Created pod: agnhost-primary-w7w5w\n"
    Sep 19 21:25:03.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2084 describe service agnhost-primary'
    Sep 19 21:25:03.236: INFO: stderr: ""
    Sep 19 21:25:03.236: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-2084\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                10.128.134.89\nIPs:               10.128.134.89\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.0.180:6379\nSession Affinity:  None\nEvents:            <none>\n"
    Sep 19 21:25:03.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2084 describe node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc'
    Sep 19 21:25:03.400: INFO: stderr: ""
    Sep 19 21:25:03.400: INFO: stdout: "Name:               k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc\n                    kubernetes.io/os=linux\nAnnotations:        cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-zpmddx\n                    cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-9w2xo8\n                    cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc\n                    cluster.x-k8s.io/owner-kind: MachineSet\n                    cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9\n                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Mon, 19 Sep 2022 20:46:34 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc\n  AcquireTime:     <unset>\n  RenewTime:       Mon, 19 Sep 2022 21:24:58 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Mon, 19 Sep 2022 21:20:22 +0000   Mon, 19 Sep 2022 20:46:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Mon, 19 Sep 2022 21:20:22 +0000   Mon, 19 Sep 2022 20:46:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Mon, 19 Sep 2022 21:20:22 +0000   Mon, 19 Sep 2022 20:46:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Mon, 19 Sep 2022 21:20:22 +0000   Mon, 19 Sep 2022 20:46:54 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.4\n  Hostname:    k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860676Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 1118bdda6a12454e86e1f474db27f844\n  System UUID:                1a2b6532-a0e8-4146-b080-fbb75f1a2fb2\n  Boot ID:                    e18b3bab-d416-45a4-94af-1e9e00b6fd4d\n  Kernel Version:             5.4.0-1076-gke\n  OS Image:                   Ubuntu 22.04.1 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.6.7\n  Kubelet Version:            v1.21.14\n  Kube-Proxy Version:         v1.21.14\nPodCIDR:                      192.168.0.0/24\nPodCIDRs:                     192.168.0.0/24\nProviderID:                   docker:////k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc\nNon-terminated 
Pods:          (6 in total)\n  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                       ------------  ----------  ---------------  -------------  ---\n  kube-system                 kindnet-c26lm              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      38m\n  kube-system                 kube-proxy-r46kc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m\n  kubectl-2084                agnhost-primary-w7w5w      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s\n  pod-network-test-2219       host-test-container-pod    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s\n  pod-network-test-2219       netserver-0                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s\n  pod-network-test-2219       test-container-pod         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests   Limits\n  --------           --------   ------\n  cpu                100m (1%)  100m (1%)\n  memory             50Mi (0%)  50Mi (0%)\n  ephemeral-storage  0 (0%)     0 (0%)\n  hugepages-1Gi      0 (0%)     0 (0%)\n  hugepages-2Mi      0 (0%)     0 (0%)\nEvents:\n  Type    Reason    Age   From        Message\n  ----    ------    ----  ----        -------\n  Normal  Starting  38m   kube-proxy  Starting kube-proxy.\n"
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:25:03.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2084" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":43,"skipped":851,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:25:03.637: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-8910380b-8302-46dc-8572-05fc97a5a431
    STEP: Creating a pod to test consume configMaps
    Sep 19 21:25:03.684: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8b6d1a20-69e7-4613-a99f-0a6c3cd4ca68" in namespace "projected-7297" to be "Succeeded or Failed"
    Sep 19 21:25:03.690: INFO: Pod "pod-projected-configmaps-8b6d1a20-69e7-4613-a99f-0a6c3cd4ca68": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163627ms
    Sep 19 21:25:05.694: INFO: Pod "pod-projected-configmaps-8b6d1a20-69e7-4613-a99f-0a6c3cd4ca68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010643131s
    STEP: Saw pod success
    Sep 19 21:25:05.694: INFO: Pod "pod-projected-configmaps-8b6d1a20-69e7-4613-a99f-0a6c3cd4ca68" satisfied condition "Succeeded or Failed"
    Sep 19 21:25:05.698: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod pod-projected-configmaps-8b6d1a20-69e7-4613-a99f-0a6c3cd4ca68 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 19 21:25:05.714: INFO: Waiting for pod pod-projected-configmaps-8b6d1a20-69e7-4613-a99f-0a6c3cd4ca68 to disappear
    Sep 19 21:25:05.717: INFO: Pod pod-projected-configmaps-8b6d1a20-69e7-4613-a99f-0a6c3cd4ca68 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:25:05.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7297" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":905,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 41 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:25:16.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-7737" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":45,"skipped":908,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:25:16.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-1064" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":46,"skipped":912,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
    Sep 19 21:25:22.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9028 explain e2e-test-crd-publish-openapi-4138-crds.spec'
    Sep 19 21:25:22.387: INFO: stderr: ""
    Sep 19 21:25:22.387: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4138-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
    Sep 19 21:25:22.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9028 explain e2e-test-crd-publish-openapi-4138-crds.spec.bars'
    Sep 19 21:25:22.628: INFO: stderr: ""
    Sep 19 21:25:22.628: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4138-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
    STEP: kubectl explain works to return error when explain is called on property that doesn't exist
    Sep 19 21:25:22.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9028 explain e2e-test-crd-publish-openapi-4138-crds.spec.bars2'
    Sep 19 21:25:22.871: INFO: rc: 1
    [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:25:25.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-9028" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":47,"skipped":933,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSliceMirroring
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:25:31.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslicemirroring-4381" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":48,"skipped":952,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:25:47.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-2710" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":49,"skipped":960,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:25:47.648: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 19 21:25:47.697: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-73dacc3a-c3a1-45d7-8afa-3c6aa88380e8" in namespace "security-context-test-6895" to be "Succeeded or Failed"
    Sep 19 21:25:47.700: INFO: Pod "alpine-nnp-false-73dacc3a-c3a1-45d7-8afa-3c6aa88380e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.969779ms
    Sep 19 21:25:49.705: INFO: Pod "alpine-nnp-false-73dacc3a-c3a1-45d7-8afa-3c6aa88380e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008174111s
    Sep 19 21:25:49.705: INFO: Pod "alpine-nnp-false-73dacc3a-c3a1-45d7-8afa-3c6aa88380e8" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:25:49.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-6895" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":965,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 21:25:49.791: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9d7adfa-7088-41e0-b852-fb572de398e5" in namespace "projected-5818" to be "Succeeded or Failed"
    Sep 19 21:25:49.794: INFO: Pod "downwardapi-volume-e9d7adfa-7088-41e0-b852-fb572de398e5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.022208ms
    Sep 19 21:25:51.799: INFO: Pod "downwardapi-volume-e9d7adfa-7088-41e0-b852-fb572de398e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007953633s
    STEP: Saw pod success
    Sep 19 21:25:51.799: INFO: Pod "downwardapi-volume-e9d7adfa-7088-41e0-b852-fb572de398e5" satisfied condition "Succeeded or Failed"
    Sep 19 21:25:51.802: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod downwardapi-volume-e9d7adfa-7088-41e0-b852-fb572de398e5 container client-container: <nil>
    STEP: delete the pod
    Sep 19 21:25:51.817: INFO: Waiting for pod downwardapi-volume-e9d7adfa-7088-41e0-b852-fb572de398e5 to disappear
    Sep 19 21:25:51.820: INFO: Pod downwardapi-volume-e9d7adfa-7088-41e0-b852-fb572de398e5 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:25:51.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5818" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":976,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:26:01.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-9875" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":52,"skipped":978,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:26:01.905: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-785bc03f-c669-4211-9d5e-d4847e249eec
    STEP: Creating a pod to test consume configMaps
    Sep 19 21:26:01.952: INFO: Waiting up to 5m0s for pod "pod-configmaps-cdae619a-ac45-484c-b38c-f25781120c44" in namespace "configmap-221" to be "Succeeded or Failed"
    Sep 19 21:26:01.956: INFO: Pod "pod-configmaps-cdae619a-ac45-484c-b38c-f25781120c44": Phase="Pending", Reason="", readiness=false. Elapsed: 3.899102ms
    Sep 19 21:26:03.962: INFO: Pod "pod-configmaps-cdae619a-ac45-484c-b38c-f25781120c44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009099124s
    STEP: Saw pod success
    Sep 19 21:26:03.962: INFO: Pod "pod-configmaps-cdae619a-ac45-484c-b38c-f25781120c44" satisfied condition "Succeeded or Failed"
    Sep 19 21:26:03.964: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod pod-configmaps-cdae619a-ac45-484c-b38c-f25781120c44 container configmap-volume-test: <nil>
    STEP: delete the pod
    Sep 19 21:26:03.990: INFO: Waiting for pod pod-configmaps-cdae619a-ac45-484c-b38c-f25781120c44 to disappear
    Sep 19 21:26:03.993: INFO: Pod pod-configmaps-cdae619a-ac45-484c-b38c-f25781120c44 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:26:03.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-221" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":979,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 21:26:04.055: INFO: Waiting up to 5m0s for pod "downwardapi-volume-901d2a2c-a542-481c-aa8d-ac95c5b4dece" in namespace "downward-api-8871" to be "Succeeded or Failed"
    Sep 19 21:26:04.059: INFO: Pod "downwardapi-volume-901d2a2c-a542-481c-aa8d-ac95c5b4dece": Phase="Pending", Reason="", readiness=false. Elapsed: 3.538998ms
    Sep 19 21:26:06.064: INFO: Pod "downwardapi-volume-901d2a2c-a542-481c-aa8d-ac95c5b4dece": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008326322s
    STEP: Saw pod success
    Sep 19 21:26:06.064: INFO: Pod "downwardapi-volume-901d2a2c-a542-481c-aa8d-ac95c5b4dece" satisfied condition "Succeeded or Failed"
    Sep 19 21:26:06.067: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod downwardapi-volume-901d2a2c-a542-481c-aa8d-ac95c5b4dece container client-container: <nil>
    STEP: delete the pod
    Sep 19 21:26:06.083: INFO: Waiting for pod downwardapi-volume-901d2a2c-a542-481c-aa8d-ac95c5b4dece to disappear
    Sep 19 21:26:06.086: INFO: Pod downwardapi-volume-901d2a2c-a542-481c-aa8d-ac95c5b4dece no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:26:06.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-8871" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":987,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:27:06.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-240" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":990,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:27:11.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-1970" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":56,"skipped":991,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:27:11.502: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep 19 21:27:11.538: INFO: Waiting up to 5m0s for pod "pod-1df5eecf-7f6c-4d0d-84f8-0568427d6c68" in namespace "emptydir-2248" to be "Succeeded or Failed"
    Sep 19 21:27:11.541: INFO: Pod "pod-1df5eecf-7f6c-4d0d-84f8-0568427d6c68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.886925ms
    Sep 19 21:27:13.545: INFO: Pod "pod-1df5eecf-7f6c-4d0d-84f8-0568427d6c68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006967871s
    STEP: Saw pod success
    Sep 19 21:27:13.545: INFO: Pod "pod-1df5eecf-7f6c-4d0d-84f8-0568427d6c68" satisfied condition "Succeeded or Failed"
    Sep 19 21:27:13.548: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod pod-1df5eecf-7f6c-4d0d-84f8-0568427d6c68 container test-container: <nil>
    STEP: delete the pod
    Sep 19 21:27:13.566: INFO: Waiting for pod pod-1df5eecf-7f6c-4d0d-84f8-0568427d6c68 to disappear
    Sep 19 21:27:13.570: INFO: Pod pod-1df5eecf-7f6c-4d0d-84f8-0568427d6c68 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:27:13.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-2248" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":1018,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:27:13.593: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 19 21:27:13.636: INFO: Waiting up to 5m0s for pod "downward-api-39cb0603-85f7-49ed-9728-0b42770a0ee1" in namespace "downward-api-1073" to be "Succeeded or Failed"
    Sep 19 21:27:13.640: INFO: Pod "downward-api-39cb0603-85f7-49ed-9728-0b42770a0ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.720868ms
    Sep 19 21:27:15.644: INFO: Pod "downward-api-39cb0603-85f7-49ed-9728-0b42770a0ee1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007528742s
    STEP: Saw pod success
    Sep 19 21:27:15.644: INFO: Pod "downward-api-39cb0603-85f7-49ed-9728-0b42770a0ee1" satisfied condition "Succeeded or Failed"
    Sep 19 21:27:15.647: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-worker-fjz9jp pod downward-api-39cb0603-85f7-49ed-9728-0b42770a0ee1 container dapi-container: <nil>
    STEP: delete the pod
    Sep 19 21:27:15.664: INFO: Waiting for pod downward-api-39cb0603-85f7-49ed-9728-0b42770a0ee1 to disappear
    Sep 19 21:27:15.666: INFO: Pod downward-api-39cb0603-85f7-49ed-9728-0b42770a0ee1 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:27:15.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-1073" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":1029,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:27:22.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-566" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":59,"skipped":1034,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 19 21:27:22.907: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e93c090-8c0a-4457-b56b-ec2e1addf9fb" in namespace "downward-api-217" to be "Succeeded or Failed"
    Sep 19 21:27:22.912: INFO: Pod "downwardapi-volume-9e93c090-8c0a-4457-b56b-ec2e1addf9fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.221037ms
    Sep 19 21:27:24.916: INFO: Pod "downwardapi-volume-9e93c090-8c0a-4457-b56b-ec2e1addf9fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008432012s
    STEP: Saw pod success
    Sep 19 21:27:24.916: INFO: Pod "downwardapi-volume-9e93c090-8c0a-4457-b56b-ec2e1addf9fb" satisfied condition "Succeeded or Failed"
    Sep 19 21:27:24.919: INFO: Trying to get logs from node k8s-upgrade-and-conformance-zpmddx-md-0-k6xrc-7bb8446fb9-f42kc pod downwardapi-volume-9e93c090-8c0a-4457-b56b-ec2e1addf9fb container client-container: <nil>
    STEP: delete the pod
    Sep 19 21:27:24.933: INFO: Waiting for pod downwardapi-volume-9e93c090-8c0a-4457-b56b-ec2e1addf9fb to disappear
    Sep 19 21:27:24.936: INFO: Pod downwardapi-volume-9e93c090-8c0a-4457-b56b-ec2e1addf9fb no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 19 21:27:24.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-217" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":1101,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
    STEP: Destroying namespace "webhook-6829-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":61,"skipped":1122,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    Sep 19 21:27:31.941: INFO: Running AfterSuite actions on all nodes
    
    
    {"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":35,"skipped":610,"failed":5,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:23:02.894: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
    
    STEP: creating a pod to probe /etc/hosts
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep 19 21:26:38.153: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-2836.svc.cluster.local from pod dns-2836/dns-test-f91e7cb1-f8ad-4475-bca4-7750184048a3: the server is currently unable to handle the request (get pods dns-test-f91e7cb1-f8ad-4475-bca4-7750184048a3)
    Sep 19 21:28:04.953: FAIL: Unable to read wheezy_hosts@dns-querier-1 from pod dns-2836/dns-test-f91e7cb1-f8ad-4475-bca4-7750184048a3: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-2836/pods/dns-test-f91e7cb1-f8ad-4475-bca4-7750184048a3/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00165dd68, 0x29a3500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc004636768, 0xc00165dd68, 0xc004636768, 0xc00165dd68)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
    testing.tRunner(0xc001e60780, 0x70fea78)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0919 21:28:04.954678      20 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep 19 21:28:04.953: Unable to read wheezy_hosts@dns-querier-1 from pod dns-2836/dns-test-f91e7cb1-f8ad-4475-bca4-7750184048a3: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-2836/pods/dns-test-f91e7cb1-f8ad-4475-bca4-7750184048a3/proxy/results/wheezy_hosts@dns-querier-1\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00165dd68, 0x29a3500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc004636768, 0xc00165dd68, 0xc004636768, 0xc00165dd68)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc00165dd68, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc000a09d80, 0x8, 0x8, 0x6ee63d3, 0x7, 0xc00221c800, 0x77b8c18, 0xc0036978c0, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc001082b00, 0xc00221c800, 0xc000a09d80, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.4()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:127 +0x62a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc001e60780)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc001e60780)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc001e60780, 0x70fea78)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
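    The Ginkgo hint elided above refers to deferring GinkgoRecover() at the top of any goroutine that makes assertions, so a failing Expect is reported to Ginkgo instead of escaping as the raw panic seen in this trace. A minimal, hypothetical sketch of that pattern (Ginkgo v1 with Gomega; package, suite, and spec names here are illustrative and not taken from this suite):

        package e2e_test

        import (
        	"testing"

        	. "github.com/onsi/ginkgo"
        	. "github.com/onsi/gomega"
        )

        // Standard Ginkgo bootstrap: wire Gomega failures into Ginkgo and run the suite.
        func TestSketch(t *testing.T) {
        	RegisterFailHandler(Fail)
        	RunSpecs(t, "sketch suite")
        }

        var _ = Describe("asserting from a goroutine", func() {
        	It("reports assertion failures raised off the main test goroutine", func() {
        		done := make(chan struct{})
        		go func() {
        			// Without this deferred call, a failing Expect in this goroutine
        			// panics outside Ginkgo's control, producing a trace like the one above.
        			defer GinkgoRecover()
        			defer close(done)
        			Expect(1 + 1).To(Equal(2))
        		}()
        		<-done
        	})
        })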
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a84100, 0xc00344d240)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6a84100, 0xc00344d240)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0032e0500, 0x12f, 0x86a5e60, 0x7d, 0xd3, 0xc000a47000, 0x7fc)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x61dbcc0, 0x75da840)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc0032e0500, 0x12f, 0xc00165d7a8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0032e0500, 0x12f, 0xc00165d890, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x6f89b47, 0x24, 0xc00165daf0, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc004636700, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00165dd68, 0x29a3500, 0x0, 0x0)
... skipping 83 lines ...
    • [SLOW TEST:360.092 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":125,"skipped":2056,"failed":7,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    Sep 19 21:30:00.688: INFO: Running AfterSuite actions on all nodes
    
    
    {"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":35,"skipped":610,"failed":6,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 19 21:28:04.986: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
    
    STEP: creating a pod to probe /etc/hosts
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep 19 21:31:41.258: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-525.svc.cluster.local from pod dns-525/dns-test-db713a07-4aa9-4058-94c2-b0070531f1ad: the server is currently unable to handle the request (get pods dns-test-db713a07-4aa9-4058-94c2-b0070531f1ad)
    Sep 19 21:33:07.045: FAIL: Unable to read wheezy_hosts@dns-querier-1 from pod dns-525/dns-test-db713a07-4aa9-4058-94c2-b0070531f1ad: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-525/pods/dns-test-db713a07-4aa9-4058-94c2-b0070531f1ad/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded

    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00165dd68, 0x29a3500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0015284e0, 0xc00165dd68, 0xc0015284e0, 0xc00165dd68)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
    testing.tRunner(0xc001e60780, 0x70fea78)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0919 21:33:07.046601      20 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep 19 21:33:07.045: Unable to read wheezy_hosts@dns-querier-1 from pod dns-525/dns-test-db713a07-4aa9-4058-94c2-b0070531f1ad: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-525/pods/dns-test-db713a07-4aa9-4058-94c2-b0070531f1ad/proxy/results/wheezy_hosts@dns-querier-1\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00165dd68, 0x29a3500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0015284e0, 0xc00165dd68, 0xc0015284e0, 0xc00165dd68)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc00165dd68, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc003726000, 0x8, 0x8, 0x6ee63d3, 0x7, 0xc00004c400, 0x77b8c18, 0xc0032c38c0, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc001082b00, 0xc00004c400, 0xc003726000, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.4()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:127 +0x62a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc001e60780)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc001e60780)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc001e60780, 0x70fea78)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a84100, 0xc003036140)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6a84100, 0xc003036140)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0040783c0, 0x12d, 0x86a5e60, 0x7d, 0xd3, 0xc000ff8000, 0x7fc)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x61dbcc0, 0x75da840)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc0040783c0, 0x12d, 0xc00165d7a8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0040783c0, 0x12d, 0xc00165d890, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x6f89b47, 0x24, 0xc00165daf0, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc001528400, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00165dd68, 0x29a3500, 0x0, 0x0)
... skipping 54 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 19 21:33:07.045: Unable to read wheezy_hosts@dns-querier-1 from pod dns-525/dns-test-db713a07-4aa9-4058-94c2-b0070531f1ad: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-525/pods/dns-test-db713a07-4aa9-4058-94c2-b0070531f1ad/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":35,"skipped":610,"failed":7,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    Sep 19 21:33:07.080: INFO: Running AfterSuite actions on all nodes
    
    
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 30 lines ...
    Sep 19 21:21:09.938: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.1.63:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:21:09.938: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:21:10.052: INFO: Found all 1 expected endpoints: [netserver-1]
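    Each probe that follows is the same curl run inside host-test-container-pod through the pod exec subresource (the ExecWithOptions lines). A minimal client-go sketch that reproduces one probe by hand, reusing the namespace, pod, container, command, and target address from this log; the kubeconfig path and the client wiring are assumptions, not code from this suite:

```go
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The same shell command the test issues via ExecWithOptions.
	cmd := []string{"/bin/sh", "-c",
		"curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'"}

	// POST .../namespaces/pod-network-test-2219/pods/host-test-container-pod/exec
	req := cs.CoreV1().RESTClient().Post().
		Namespace("pod-network-test-2219").
		Resource("pods").
		Name("host-test-container-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "agnhost-container",
			Command:   cmd,
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	executor, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	err = executor.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	fmt.Printf("err=%v stdout=%q stderr=%q\n", err, stdout.String(), stderr.String())
}
```

    The CLI equivalent is `kubectl exec -n pod-network-test-2219 host-test-container-pod -c agnhost-container -- /bin/sh -c '<the curl above>'`. Note that the "exit code 1" seen repeatedly below comes from the trailing grep finding no non-blank output, i.e. curl received nothing from 192.168.2.64:8080 within its timeout.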
    Sep 19 21:21:10.052: INFO: Going to poll 192.168.2.64 on port 8080 at least 0 times, with a maximum of 46 tries before failing
    Sep 19 21:21:10.056: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:21:10.056: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:21:25.158: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:21:25.158: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:21:27.162: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:21:27.162: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:21:42.263: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:21:42.263: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:21:44.268: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:21:44.268: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:21:59.345: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:21:59.345: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:22:01.350: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:22:01.350: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:22:16.416: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:22:16.416: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:22:18.419: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:22:18.420: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:22:33.503: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:22:33.503: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:22:35.508: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:22:35.508: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:22:50.587: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:22:50.587: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:22:52.591: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:22:52.591: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:23:07.675: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:23:07.675: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:23:09.681: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:23:09.681: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:23:24.793: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:23:24.793: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:23:26.798: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:23:26.798: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:23:41.878: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:23:41.878: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:23:43.883: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:23:43.883: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:23:58.957: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:23:58.957: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:24:00.961: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:24:00.962: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:24:16.045: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:24:16.045: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:24:18.051: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:24:18.051: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:24:33.144: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:24:33.144: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:24:35.148: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:24:35.148: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:24:50.224: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:24:50.225: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:24:52.233: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:24:52.233: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:25:07.309: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:25:07.309: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:25:09.313: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:25:09.313: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:25:24.389: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:25:24.390: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:25:26.394: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:25:26.394: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:25:41.479: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:25:41.479: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:25:43.484: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:25:43.484: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:25:58.574: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:25:58.574: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:26:00.578: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:26:00.578: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:26:15.659: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:26:15.659: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:26:17.664: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:26:17.664: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:26:32.745: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:26:32.745: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:26:34.750: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:26:34.750: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:26:49.852: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:26:49.852: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:26:51.857: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:26:51.857: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:27:06.946: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:27:06.946: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:27:08.950: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:27:08.950: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:27:24.032: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:27:24.032: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:27:26.036: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:27:26.036: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:27:41.105: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:27:41.105: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:27:43.109: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:27:43.109: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:27:58.192: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:27:58.192: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:28:00.197: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:28:00.197: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:28:15.312: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:28:15.312: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:28:17.317: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:28:17.317: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:28:32.425: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:28:32.425: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:28:34.430: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:28:34.430: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:28:49.518: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:28:49.518: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:28:51.523: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:28:51.523: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:29:06.614: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:29:06.614: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:29:08.618: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:29:08.618: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:29:23.698: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:29:23.698: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:29:25.702: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:29:25.702: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:29:40.814: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:29:40.814: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:29:42.818: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:29:42.818: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:29:57.909: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:29:57.909: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:29:59.914: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:29:59.914: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:30:15.007: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:30:15.007: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:30:17.013: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:30:17.013: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:30:32.089: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:30:32.089: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:30:34.094: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:30:34.094: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:30:49.182: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:30:49.182: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:30:51.188: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:30:51.188: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:31:06.272: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:31:06.272: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:31:08.276: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:31:08.276: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:31:23.357: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:31:23.357: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:31:25.362: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:31:25.362: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:31:40.450: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:31:40.450: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:31:42.455: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:31:42.455: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:31:57.542: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:31:57.542: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:31:59.547: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:31:59.548: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:32:14.640: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:32:14.640: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:32:16.644: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:32:16.644: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:32:31.728: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:32:31.729: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:32:33.733: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:32:33.733: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:32:48.804: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:32:48.804: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:32:50.810: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:32:50.810: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:33:05.930: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:33:05.930: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:33:07.934: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2219 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 19 21:33:07.934: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 19 21:33:23.033: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.64:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep 19 21:33:23.033: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep 19 21:33:25.038: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 h