Result: FAILURE
Tests: 0 failed / 7 succeeded
Started: 2022-09-12 20:28
Elapsed: 1h6m
Revision: main

No Test Failures!


7 Passed Tests

20 Skipped Tests

Error lines from build-log.txt

... skipping 906 lines ...
Status: Downloaded newer image for quay.io/jetstack/cert-manager-controller:v1.9.1
quay.io/jetstack/cert-manager-controller:v1.9.1
+ export GINKGO_NODES=3
+ GINKGO_NODES=3
+ export GINKGO_NOCOLOR=true
+ GINKGO_NOCOLOR=true
+ export GINKGO_ARGS=--fail-fast
+ GINKGO_ARGS=--fail-fast
+ export E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ export ARTIFACTS=/logs/artifacts
+ ARTIFACTS=/logs/artifacts
+ export SKIP_RESOURCE_CLEANUP=false
+ SKIP_RESOURCE_CLEANUP=false
... skipping 79 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6 --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition.yaml
mkdir -p /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/extension/config/default > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension/deployment.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/ginkgo-v2.1.4 -v --trace --tags=e2e --focus="\[K8s-Upgrade\]"  --nodes=3 --no-color=true --output-dir="/logs/artifacts" --junit-report="junit.e2e_suite.1.xml" --fail-fast . -- \
    -e2e.artifacts-folder="/logs/artifacts" \
    -e2e.config="/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml" \
    -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false
go: downloading github.com/blang/semver v3.5.1+incompatible
go: downloading k8s.io/apimachinery v0.25.0
go: downloading github.com/onsi/gomega v1.20.0
... skipping 226 lines ...
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-6izh7i-mp-0-config created
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-6izh7i-mp-0-config-cgroupfs created
    cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-6izh7i created
    machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-6izh7i-mp-0 created
    dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-6izh7i-dmp-0 created

    Failed to get logs for Machine k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x, Cluster k8s-upgrade-and-conformance-n56wbd/k8s-upgrade-and-conformance-6izh7i: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv, Cluster k8s-upgrade-and-conformance-n56wbd/k8s-upgrade-and-conformance-6izh7i: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-6izh7i-xthx7-vtzcf, Cluster k8s-upgrade-and-conformance-n56wbd/k8s-upgrade-and-conformance-6izh7i: exit status 2
    Failed to get logs for MachinePool k8s-upgrade-and-conformance-6izh7i-mp-0, Cluster k8s-upgrade-and-conformance-n56wbd/k8s-upgrade-and-conformance-6izh7i: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec 09/12/22 20:36:15.994
    INFO: Creating namespace k8s-upgrade-and-conformance-n56wbd
    INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-n56wbd"
... skipping 41 lines ...
    
    Running in parallel across 4 nodes
    
    Sep 12 20:44:52.006: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 12 20:44:52.010: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
    Sep 12 20:44:52.027: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
    Sep 12 20:44:52.076: INFO: The status of Pod coredns-558bd4d5db-md2b2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:52.076: INFO: The status of Pod kindnet-gq5v4 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:52.076: INFO: The status of Pod kindnet-xnwzs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:52.076: INFO: The status of Pod kube-proxy-ndpgl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:52.076: INFO: The status of Pod kube-proxy-rllv6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:52.076: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
    Sep 12 20:44:52.076: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 12 20:44:52.076: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 12 20:44:52.076: INFO: coredns-558bd4d5db-md2b2  k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  }]
    Sep 12 20:44:52.076: INFO: kindnet-gq5v4             k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:44:52.076: INFO: kindnet-xnwzs             k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:52 +0000 UTC  }]
    Sep 12 20:44:52.076: INFO: kube-proxy-ndpgl          k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:44:52.076: INFO: kube-proxy-rllv6          k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  }]
    Sep 12 20:44:52.076: INFO: 
    Sep 12 20:44:54.101: INFO: The status of Pod coredns-558bd4d5db-md2b2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:54.101: INFO: The status of Pod kindnet-gq5v4 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:54.101: INFO: The status of Pod kindnet-xnwzs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:54.101: INFO: The status of Pod kube-proxy-ndpgl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:54.101: INFO: The status of Pod kube-proxy-rllv6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:54.101: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
    Sep 12 20:44:54.101: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 12 20:44:54.101: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 12 20:44:54.101: INFO: coredns-558bd4d5db-md2b2  k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  }]
    Sep 12 20:44:54.101: INFO: kindnet-gq5v4             k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:44:54.101: INFO: kindnet-xnwzs             k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:52 +0000 UTC  }]
    Sep 12 20:44:54.102: INFO: kube-proxy-ndpgl          k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:44:54.102: INFO: kube-proxy-rllv6          k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  }]
    Sep 12 20:44:54.102: INFO: 
    Sep 12 20:44:56.102: INFO: The status of Pod coredns-558bd4d5db-md2b2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:56.102: INFO: The status of Pod kindnet-gq5v4 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:56.102: INFO: The status of Pod kindnet-xnwzs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:56.102: INFO: The status of Pod kube-proxy-ndpgl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:56.102: INFO: The status of Pod kube-proxy-rllv6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:56.102: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
    Sep 12 20:44:56.102: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 12 20:44:56.102: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 12 20:44:56.102: INFO: coredns-558bd4d5db-md2b2  k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  }]
    Sep 12 20:44:56.102: INFO: kindnet-gq5v4             k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:44:56.102: INFO: kindnet-xnwzs             k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:52 +0000 UTC  }]
    Sep 12 20:44:56.102: INFO: kube-proxy-ndpgl          k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:44:56.102: INFO: kube-proxy-rllv6          k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  }]
    Sep 12 20:44:56.102: INFO: 
    Sep 12 20:44:58.100: INFO: The status of Pod coredns-558bd4d5db-md2b2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:58.100: INFO: The status of Pod kindnet-gq5v4 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:58.100: INFO: The status of Pod kindnet-xnwzs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:58.100: INFO: The status of Pod kube-proxy-ndpgl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:58.100: INFO: The status of Pod kube-proxy-rllv6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:44:58.100: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
    Sep 12 20:44:58.100: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 12 20:44:58.100: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 12 20:44:58.100: INFO: coredns-558bd4d5db-md2b2  k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  }]
    Sep 12 20:44:58.100: INFO: kindnet-gq5v4             k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:44:58.100: INFO: kindnet-xnwzs             k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:52 +0000 UTC  }]
    Sep 12 20:44:58.100: INFO: kube-proxy-ndpgl          k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:44:58.100: INFO: kube-proxy-rllv6          k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  }]
    Sep 12 20:44:58.100: INFO: 
    Sep 12 20:45:00.100: INFO: The status of Pod coredns-558bd4d5db-md2b2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:00.101: INFO: The status of Pod kindnet-gq5v4 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:00.101: INFO: The status of Pod kindnet-xnwzs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:00.101: INFO: The status of Pod kube-proxy-ndpgl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:00.101: INFO: The status of Pod kube-proxy-rllv6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:00.101: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
    Sep 12 20:45:00.101: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 12 20:45:00.101: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 12 20:45:00.101: INFO: coredns-558bd4d5db-md2b2  k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  }]
    Sep 12 20:45:00.101: INFO: kindnet-gq5v4             k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:45:00.101: INFO: kindnet-xnwzs             k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:52 +0000 UTC  }]
    Sep 12 20:45:00.101: INFO: kube-proxy-ndpgl          k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:45:00.101: INFO: kube-proxy-rllv6          k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  }]
    Sep 12 20:45:00.101: INFO: 
    Sep 12 20:45:02.098: INFO: The status of Pod coredns-558bd4d5db-md2b2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:02.098: INFO: The status of Pod kindnet-gq5v4 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:02.098: INFO: The status of Pod kindnet-xnwzs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:02.098: INFO: The status of Pod kube-proxy-ndpgl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:02.098: INFO: The status of Pod kube-proxy-rllv6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:02.098: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (10 seconds elapsed)
    Sep 12 20:45:02.098: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 12 20:45:02.098: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 12 20:45:02.098: INFO: coredns-558bd4d5db-md2b2  k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  }]
    Sep 12 20:45:02.098: INFO: kindnet-gq5v4             k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:45:02.098: INFO: kindnet-xnwzs             k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:52 +0000 UTC  }]
    Sep 12 20:45:02.098: INFO: kube-proxy-ndpgl          k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:45:02.098: INFO: kube-proxy-rllv6          k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  }]
    Sep 12 20:45:02.098: INFO: 
    Sep 12 20:45:04.101: INFO: The status of Pod coredns-558bd4d5db-md2b2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:04.101: INFO: The status of Pod kindnet-gq5v4 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:04.101: INFO: The status of Pod kindnet-xnwzs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:04.101: INFO: The status of Pod kube-proxy-ndpgl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:04.101: INFO: The status of Pod kube-proxy-rllv6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:04.101: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (12 seconds elapsed)
    Sep 12 20:45:04.101: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 12 20:45:04.101: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 12 20:45:04.101: INFO: coredns-558bd4d5db-md2b2  k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  }]
    Sep 12 20:45:04.101: INFO: kindnet-gq5v4             k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:45:04.101: INFO: kindnet-xnwzs             k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:52 +0000 UTC  }]
    Sep 12 20:45:04.101: INFO: kube-proxy-ndpgl          k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:45:04.101: INFO: kube-proxy-rllv6          k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  }]
    Sep 12 20:45:04.101: INFO: 
    Sep 12 20:45:06.100: INFO: The status of Pod coredns-558bd4d5db-md2b2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:06.100: INFO: The status of Pod kindnet-gq5v4 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:06.100: INFO: The status of Pod kindnet-xnwzs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:06.100: INFO: The status of Pod kube-proxy-ndpgl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:06.100: INFO: The status of Pod kube-proxy-rllv6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:06.100: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (14 seconds elapsed)
    Sep 12 20:45:06.100: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 12 20:45:06.100: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 12 20:45:06.100: INFO: coredns-558bd4d5db-md2b2  k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  }]
    Sep 12 20:45:06.100: INFO: kindnet-gq5v4             k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:45:06.100: INFO: kindnet-xnwzs             k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:52 +0000 UTC  }]
    Sep 12 20:45:06.100: INFO: kube-proxy-ndpgl          k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:45:06.100: INFO: kube-proxy-rllv6          k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  }]
    Sep 12 20:45:06.100: INFO: 
    Sep 12 20:45:08.097: INFO: The status of Pod coredns-558bd4d5db-md2b2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:08.097: INFO: The status of Pod kindnet-gq5v4 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:08.097: INFO: The status of Pod kindnet-xnwzs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:08.097: INFO: The status of Pod kube-proxy-ndpgl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:08.097: INFO: The status of Pod kube-proxy-rllv6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:08.097: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (16 seconds elapsed)
    Sep 12 20:45:08.097: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 12 20:45:08.097: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 12 20:45:08.097: INFO: coredns-558bd4d5db-md2b2  k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:08 +0000 UTC  }]
    Sep 12 20:45:08.097: INFO: kindnet-gq5v4             k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:45:08.097: INFO: kindnet-xnwzs             k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:37:52 +0000 UTC  }]
    Sep 12 20:45:08.097: INFO: kube-proxy-ndpgl          k8s-upgrade-and-conformance-6izh7i-worker-8shgi8  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:38:08 +0000 UTC  }]
    Sep 12 20:45:08.097: INFO: kube-proxy-rllv6          k8s-upgrade-and-conformance-6izh7i-worker-2wnuvk  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:44:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:42:21 +0000 UTC  }]
    Sep 12 20:45:08.097: INFO: 
    Sep 12 20:45:10.100: INFO: The status of Pod coredns-558bd4d5db-gzr5m is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 12 20:45:10.100: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (18 seconds elapsed)
    Sep 12 20:45:10.100: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 12 20:45:10.100: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 12 20:45:10.100: INFO: coredns-558bd4d5db-gzr5m  k8s-upgrade-and-conformance-6izh7i-worker-mgm4ov  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:45:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:45:09 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:45:09 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 20:45:09 +0000 UTC  }]
    Sep 12 20:45:10.100: INFO: 
    Sep 12 20:45:12.096: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (20 seconds elapsed)
... skipping 54 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 20:45:12.244: INFO: Waiting up to 5m0s for pod "downwardapi-volume-709f69c3-cc8c-41ab-9919-10f41f36a750" in namespace "downward-api-4579" to be "Succeeded or Failed"
    Sep 12 20:45:12.248: INFO: Pod "downwardapi-volume-709f69c3-cc8c-41ab-9919-10f41f36a750": Phase="Pending", Reason="", readiness=false. Elapsed: 3.749866ms
    Sep 12 20:45:14.259: INFO: Pod "downwardapi-volume-709f69c3-cc8c-41ab-9919-10f41f36a750": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0147168s
    Sep 12 20:45:16.264: INFO: Pod "downwardapi-volume-709f69c3-cc8c-41ab-9919-10f41f36a750": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020425823s
    Sep 12 20:45:18.291: INFO: Pod "downwardapi-volume-709f69c3-cc8c-41ab-9919-10f41f36a750": Phase="Running", Reason="", readiness=true. Elapsed: 6.047010974s
    Sep 12 20:45:20.297: INFO: Pod "downwardapi-volume-709f69c3-cc8c-41ab-9919-10f41f36a750": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053396732s
    STEP: Saw pod success
    Sep 12 20:45:20.297: INFO: Pod "downwardapi-volume-709f69c3-cc8c-41ab-9919-10f41f36a750" satisfied condition "Succeeded or Failed"
    Sep 12 20:45:20.301: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod downwardapi-volume-709f69c3-cc8c-41ab-9919-10f41f36a750 container client-container: <nil>
    STEP: delete the pod
    Sep 12 20:45:20.330: INFO: Waiting for pod downwardapi-volume-709f69c3-cc8c-41ab-9919-10f41f36a750 to disappear
    Sep 12 20:45:20.335: INFO: Pod downwardapi-volume-709f69c3-cc8c-41ab-9919-10f41f36a750 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:20.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-4579" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    Sep 12 20:45:12.288: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-9c3d420d-a34c-479d-b7e7-a8471ac286ac
    STEP: Creating a pod to test consume secrets
    Sep 12 20:45:12.336: INFO: Waiting up to 5m0s for pod "pod-secrets-d1db4f01-b358-4026-a85d-f47723d37e92" in namespace "secrets-8182" to be "Succeeded or Failed"
    Sep 12 20:45:12.341: INFO: Pod "pod-secrets-d1db4f01-b358-4026-a85d-f47723d37e92": Phase="Pending", Reason="", readiness=false. Elapsed: 5.295104ms
    Sep 12 20:45:14.348: INFO: Pod "pod-secrets-d1db4f01-b358-4026-a85d-f47723d37e92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01210682s
    Sep 12 20:45:16.353: INFO: Pod "pod-secrets-d1db4f01-b358-4026-a85d-f47723d37e92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016698217s
    Sep 12 20:45:18.356: INFO: Pod "pod-secrets-d1db4f01-b358-4026-a85d-f47723d37e92": Phase="Running", Reason="", readiness=true. Elapsed: 6.019999391s
    Sep 12 20:45:20.361: INFO: Pod "pod-secrets-d1db4f01-b358-4026-a85d-f47723d37e92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.025370592s
    STEP: Saw pod success
    Sep 12 20:45:20.361: INFO: Pod "pod-secrets-d1db4f01-b358-4026-a85d-f47723d37e92" satisfied condition "Succeeded or Failed"
    Sep 12 20:45:20.365: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-mgm4ov pod pod-secrets-d1db4f01-b358-4026-a85d-f47723d37e92 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 12 20:45:20.396: INFO: Waiting for pod pod-secrets-d1db4f01-b358-4026-a85d-f47723d37e92 to disappear
    Sep 12 20:45:20.405: INFO: Pod pod-secrets-d1db4f01-b358-4026-a85d-f47723d37e92 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:20.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-8182" for this suite.
    
    •S
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:45:12.251: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 20:45:12.316: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbd34493-b876-431c-9f69-db721b642718" in namespace "projected-6657" to be "Succeeded or Failed"
    Sep 12 20:45:12.337: INFO: Pod "downwardapi-volume-bbd34493-b876-431c-9f69-db721b642718": Phase="Pending", Reason="", readiness=false. Elapsed: 20.880595ms
    Sep 12 20:45:14.343: INFO: Pod "downwardapi-volume-bbd34493-b876-431c-9f69-db721b642718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027240611s
    Sep 12 20:45:16.347: INFO: Pod "downwardapi-volume-bbd34493-b876-431c-9f69-db721b642718": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031171752s
    Sep 12 20:45:18.353: INFO: Pod "downwardapi-volume-bbd34493-b876-431c-9f69-db721b642718": Phase="Running", Reason="", readiness=true. Elapsed: 6.036906064s
    Sep 12 20:45:20.358: INFO: Pod "downwardapi-volume-bbd34493-b876-431c-9f69-db721b642718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042152537s
    STEP: Saw pod success
    Sep 12 20:45:20.358: INFO: Pod "downwardapi-volume-bbd34493-b876-431c-9f69-db721b642718" satisfied condition "Succeeded or Failed"
    Sep 12 20:45:20.362: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod downwardapi-volume-bbd34493-b876-431c-9f69-db721b642718 container client-container: <nil>
    STEP: delete the pod
    Sep 12 20:45:20.395: INFO: Waiting for pod downwardapi-volume-bbd34493-b876-431c-9f69-db721b642718 to disappear
    Sep 12 20:45:20.402: INFO: Pod downwardapi-volume-bbd34493-b876-431c-9f69-db721b642718 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:20.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6657" for this suite.
    
    •S
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":52,"failed":0}

    
    S
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 20:45:20.491: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98b194c0-d03a-4d1c-a975-c774f1852e85" in namespace "projected-4720" to be "Succeeded or Failed"
    Sep 12 20:45:20.496: INFO: Pod "downwardapi-volume-98b194c0-d03a-4d1c-a975-c774f1852e85": Phase="Pending", Reason="", readiness=false. Elapsed: 5.019899ms
    Sep 12 20:45:22.500: INFO: Pod "downwardapi-volume-98b194c0-d03a-4d1c-a975-c774f1852e85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009400146s
    STEP: Saw pod success
    Sep 12 20:45:22.500: INFO: Pod "downwardapi-volume-98b194c0-d03a-4d1c-a975-c774f1852e85" satisfied condition "Succeeded or Failed"
    Sep 12 20:45:22.503: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod downwardapi-volume-98b194c0-d03a-4d1c-a975-c774f1852e85 container client-container: <nil>
    STEP: delete the pod
    Sep 12 20:45:22.548: INFO: Waiting for pod downwardapi-volume-98b194c0-d03a-4d1c-a975-c774f1852e85 to disappear
    Sep 12 20:45:22.554: INFO: Pod downwardapi-volume-98b194c0-d03a-4d1c-a975-c774f1852e85 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:22.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4720" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":57,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
    [It] should contain environment variables for services [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 12 20:45:20.493: INFO: The status of Pod server-envvars-30c7e2eb-82fb-40b1-bfff-2181225619db is Pending, waiting for it to be Running (with Ready = true)
    Sep 12 20:45:22.498: INFO: The status of Pod server-envvars-30c7e2eb-82fb-40b1-bfff-2181225619db is Running (Ready = true)
    Sep 12 20:45:22.518: INFO: Waiting up to 5m0s for pod "client-envvars-ba7ae767-0018-459d-b9a4-e1359968d18a" in namespace "pods-3862" to be "Succeeded or Failed"
    Sep 12 20:45:22.527: INFO: Pod "client-envvars-ba7ae767-0018-459d-b9a4-e1359968d18a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.859735ms
    Sep 12 20:45:24.531: INFO: Pod "client-envvars-ba7ae767-0018-459d-b9a4-e1359968d18a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013139111s
    STEP: Saw pod success
    Sep 12 20:45:24.531: INFO: Pod "client-envvars-ba7ae767-0018-459d-b9a4-e1359968d18a" satisfied condition "Succeeded or Failed"
    Sep 12 20:45:24.534: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod client-envvars-ba7ae767-0018-459d-b9a4-e1359968d18a container env3cont: <nil>
    STEP: delete the pod
    Sep 12 20:45:24.547: INFO: Waiting for pod client-envvars-ba7ae767-0018-459d-b9a4-e1359968d18a to disappear
    Sep 12 20:45:24.549: INFO: Pod client-envvars-ba7ae767-0018-459d-b9a4-e1359968d18a no longer exists
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:24.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-3862" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":56,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:45:22.643: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-152c1e71-b803-4baa-a9b9-1126e335e2ca
    STEP: Creating a pod to test consume configMaps
    Sep 12 20:45:22.685: INFO: Waiting up to 5m0s for pod "pod-configmaps-9c132e4a-9110-4735-ad73-ea099c6ebae6" in namespace "configmap-1729" to be "Succeeded or Failed"
    Sep 12 20:45:22.690: INFO: Pod "pod-configmaps-9c132e4a-9110-4735-ad73-ea099c6ebae6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.486559ms
    Sep 12 20:45:24.694: INFO: Pod "pod-configmaps-9c132e4a-9110-4735-ad73-ea099c6ebae6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009353722s
    STEP: Saw pod success
    Sep 12 20:45:24.694: INFO: Pod "pod-configmaps-9c132e4a-9110-4735-ad73-ea099c6ebae6" satisfied condition "Succeeded or Failed"
    Sep 12 20:45:24.697: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-configmaps-9c132e4a-9110-4735-ad73-ea099c6ebae6 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 20:45:24.725: INFO: Waiting for pod pod-configmaps-9c132e4a-9110-4735-ad73-ea099c6ebae6 to disappear
    Sep 12 20:45:24.728: INFO: Pod pod-configmaps-9c132e4a-9110-4735-ad73-ea099c6ebae6 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:24.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-1729" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":97,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] version v1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 39 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:26.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "proxy-1507" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":4,"skipped":104,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 20:45:27.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d76392db-a294-47c3-91cb-3a00da722ca8" in namespace "projected-9864" to be "Succeeded or Failed"
    Sep 12 20:45:27.057: INFO: Pod "downwardapi-volume-d76392db-a294-47c3-91cb-3a00da722ca8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.370536ms
    Sep 12 20:45:29.062: INFO: Pod "downwardapi-volume-d76392db-a294-47c3-91cb-3a00da722ca8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012097687s
    STEP: Saw pod success
    Sep 12 20:45:29.062: INFO: Pod "downwardapi-volume-d76392db-a294-47c3-91cb-3a00da722ca8" satisfied condition "Succeeded or Failed"
    Sep 12 20:45:29.066: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod downwardapi-volume-d76392db-a294-47c3-91cb-3a00da722ca8 container client-container: <nil>
    STEP: delete the pod
    Sep 12 20:45:29.081: INFO: Waiting for pod downwardapi-volume-d76392db-a294-47c3-91cb-3a00da722ca8 to disappear
    Sep 12 20:45:29.084: INFO: Pod downwardapi-volume-d76392db-a294-47c3-91cb-3a00da722ca8 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:29.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9864" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":108,"failed":0}

    
    SS
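
    The "Waiting up to 5m0s ... to be "Succeeded or Failed"" and "Elapsed" lines above come from the framework polling the pod's phase. A minimal client-go sketch of that wait loop, assuming an already-configured clientset (this is not the e2e framework's actual helper):

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodSuccess polls the pod every 2s for up to 5m. Succeeded ends the
    // poll; Failed is also terminal, so it aborts with an error instead of timing out.
    func waitForPodSuccess(cs kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                return true, nil
            case corev1.PodFailed:
                return false, fmt.Errorf("pod %s/%s failed", ns, name)
            default:
                return false, nil // Pending/Running: keep polling
            }
        })
    }
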
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:45:29.098: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep 12 20:45:29.139: INFO: Waiting up to 5m0s for pod "pod-1c9fa115-a576-4426-b23c-5ce557a1c381" in namespace "emptydir-7388" to be "Succeeded or Failed"
    Sep 12 20:45:29.142: INFO: Pod "pod-1c9fa115-a576-4426-b23c-5ce557a1c381": Phase="Pending", Reason="", readiness=false. Elapsed: 3.246384ms
    Sep 12 20:45:31.147: INFO: Pod "pod-1c9fa115-a576-4426-b23c-5ce557a1c381": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008342804s
    STEP: Saw pod success
    Sep 12 20:45:31.147: INFO: Pod "pod-1c9fa115-a576-4426-b23c-5ce557a1c381" satisfied condition "Succeeded or Failed"
    Sep 12 20:45:31.151: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-1c9fa115-a576-4426-b23c-5ce557a1c381 container test-container: <nil>
    STEP: delete the pod
    Sep 12 20:45:31.167: INFO: Waiting for pod pod-1c9fa115-a576-4426-b23c-5ce557a1c381 to disappear
    Sep 12 20:45:31.172: INFO: Pod pod-1c9fa115-a576-4426-b23c-5ce557a1c381 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:31.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4185" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":125,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 49 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:34.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-4657" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:45:34.847: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 12 20:45:34.889: INFO: Waiting up to 5m0s for pod "downward-api-74dff836-c064-46d3-b677-450823d5a841" in namespace "downward-api-896" to be "Succeeded or Failed"
    Sep 12 20:45:34.893: INFO: Pod "downward-api-74dff836-c064-46d3-b677-450823d5a841": Phase="Pending", Reason="", readiness=false. Elapsed: 3.435994ms
    Sep 12 20:45:36.898: INFO: Pod "downward-api-74dff836-c064-46d3-b677-450823d5a841": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009239648s
    STEP: Saw pod success
    Sep 12 20:45:36.898: INFO: Pod "downward-api-74dff836-c064-46d3-b677-450823d5a841" satisfied condition "Succeeded or Failed"
    Sep 12 20:45:36.902: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod downward-api-74dff836-c064-46d3-b677-450823d5a841 container dapi-container: <nil>
    STEP: delete the pod
    Sep 12 20:45:36.917: INFO: Waiting for pod downward-api-74dff836-c064-46d3-b677-450823d5a841 to disappear
    Sep 12 20:45:36.920: INFO: Pod downward-api-74dff836-c064-46d3-b677-450823d5a841 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:36.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-896" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
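
    The Downward API test above injects the container's own limits and requests into its environment via resourceFieldRef (with no containerName, the selector refers to the enclosing container; per the earlier projected-downwardAPI test, an unset limit resolves to the node-allocatable value). A sketch of the relevant EnvVar entries, with illustrative variable names:

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // resourceEnvVars exposes the container's limits/requests as env vars.
    func resourceEnvVars() []corev1.EnvVar {
        var vars []corev1.EnvVar
        for _, f := range []struct{ name, res string }{
            {"CPU_LIMIT", "limits.cpu"},
            {"MEMORY_LIMIT", "limits.memory"},
            {"CPU_REQUEST", "requests.cpu"},
            {"MEMORY_REQUEST", "requests.memory"},
        } {
            vars = append(vars, corev1.EnvVar{
                Name: f.name,
                ValueFrom: &corev1.EnvVarSource{
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        Resource: f.res,
                        Divisor:  resource.MustParse("1"), // report in whole units
                    },
                },
            })
        }
        return vars
    }
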
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":110,"failed":0}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:45:31.184: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
    STEP: Destroying namespace "webhook-4428-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":7,"skipped":110,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:45:37.302: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create secret due to empty secret key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name secret-emptykey-test-b7ff3b4b-9fe7-4f89-964b-c122a50cd9ed
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:37.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-8469" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":8,"skipped":159,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
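
    The empty-secret-key test relies purely on API-server validation: a Secret whose data map contains an empty key is rejected at Create time, so no pod ever runs. A sketch of the failing call, assuming a configured clientset and an illustrative Secret name:

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createEmptyKeySecret attempts to create a Secret with an empty data key.
    // Validation rejects it, so the returned error is expected to be non-nil.
    func createEmptyKeySecret(cs kubernetes.Interface, ns string) error {
        s := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-demo"},
            Data:       map[string][]byte{"": []byte("value")},
        }
        _, err := cs.CoreV1().Secrets(ns).Create(context.TODO(), s, metav1.CreateOptions{})
        return err
    }
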
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-8628-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":4,"skipped":126,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:45:37.439: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep 12 20:45:37.491: INFO: Waiting up to 5m0s for pod "pod-712bf738-dbed-4622-b2cd-dddb6a5d844d" in namespace "emptydir-941" to be "Succeeded or Failed"
    Sep 12 20:45:37.494: INFO: Pod "pod-712bf738-dbed-4622-b2cd-dddb6a5d844d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.935918ms
    Sep 12 20:45:39.499: INFO: Pod "pod-712bf738-dbed-4622-b2cd-dddb6a5d844d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008503283s
    STEP: Saw pod success
    Sep 12 20:45:39.499: INFO: Pod "pod-712bf738-dbed-4622-b2cd-dddb6a5d844d" satisfied condition "Succeeded or Failed"
    Sep 12 20:45:39.503: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-712bf738-dbed-4622-b2cd-dddb6a5d844d container test-container: <nil>
    STEP: delete the pod
    Sep 12 20:45:39.524: INFO: Waiting for pod pod-712bf738-dbed-4622-b2cd-dddb6a5d844d to disappear
    Sep 12 20:45:39.529: INFO: Pod pod-712bf738-dbed-4622-b2cd-dddb6a5d844d no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:39.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-941" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":190,"failed":0}

    
    SSSSSSSSSSSSSSSSS
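
    The emptyDir tests in this run vary two axes: the file mode/ownership baked into the test command, and the storage medium in the volume spec. The medium is the only spec-level switch; a sketch of both variants the tests exercise (volume names are illustrative):

    import corev1 "k8s.io/api/core/v1"

    // emptyDirVolumes shows the node's default (disk-backed) medium and
    // Memory, which is mounted as tmpfs.
    func emptyDirVolumes() []corev1.Volume {
        return []corev1.Volume{
            {Name: "scratch-default", VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{},
            }},
            {Name: "scratch-tmpfs", VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
            }},
        }
    }
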
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:45:38.657: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 12 20:45:38.760: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-e79abf41-eeb0-4b8d-ac75-33e890fa2607" in namespace "security-context-test-502" to be "Succeeded or Failed"
    Sep 12 20:45:38.763: INFO: Pod "busybox-readonly-false-e79abf41-eeb0-4b8d-ac75-33e890fa2607": Phase="Pending", Reason="", readiness=false. Elapsed: 2.757324ms
    Sep 12 20:45:40.772: INFO: Pod "busybox-readonly-false-e79abf41-eeb0-4b8d-ac75-33e890fa2607": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012608946s
    Sep 12 20:45:40.773: INFO: Pod "busybox-readonly-false-e79abf41-eeb0-4b8d-ac75-33e890fa2607" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:40.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-502" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":202,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
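
    The Security Context test above sets readOnlyRootFilesystem=false and verifies the container can still write to its root filesystem. The corresponding container-level field, sketched with illustrative values (not the test's generated pod):

    import corev1 "k8s.io/api/core/v1"

    // writableRootfsContainer runs with readOnlyRootFilesystem explicitly
    // false, so a write to the root filesystem is expected to succeed.
    func writableRootfsContainer() corev1.Container {
        readOnly := false
        return corev1.Container{
            Name:    "busybox-readonly-false",
            Image:   "busybox",
            Command: []string{"sh", "-c", "touch /tmp/probe"},
            SecurityContext: &corev1.SecurityContext{
                ReadOnlyRootFilesystem: &readOnly,
            },
        }
    }
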
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:41.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-1760" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":3,"skipped":40,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:41.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5237" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":4,"skipped":47,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:44.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7509" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":6,"skipped":232,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
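
    The "Kubectl patch should add annotations" test drives the kubectl binary, but the same strategic-merge patch can be issued directly with client-go. A sketch with an illustrative annotation key/value, assuming a configured clientset:

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // annotatePod sends the same kind of patch kubectl does for
    // `kubectl patch pod <name> -p '{"metadata":{"annotations":{"x":"y"}}}'`.
    func annotatePod(cs kubernetes.Interface, ns, name string) error {
        patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
        _, err := cs.CoreV1().Pods(ns).Patch(
            context.TODO(), name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        return err
    }
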
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:44.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-5242" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":7,"skipped":269,"failed":0}

    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:45:44.441: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename podtemplate
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:44.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-7351" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":8,"skipped":269,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 41 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:45:54.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-6948" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":9,"skipped":283,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-4273-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":10,"skipped":304,"failed":0}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:45:58.890: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override command
    Sep 12 20:45:58.947: INFO: Waiting up to 5m0s for pod "client-containers-78c97bc3-4249-4a72-b126-8059732954b0" in namespace "containers-1557" to be "Succeeded or Failed"
    Sep 12 20:45:58.952: INFO: Pod "client-containers-78c97bc3-4249-4a72-b126-8059732954b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.929516ms
    Sep 12 20:46:00.956: INFO: Pod "client-containers-78c97bc3-4249-4a72-b126-8059732954b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00910567s
    STEP: Saw pod success
    Sep 12 20:46:00.957: INFO: Pod "client-containers-78c97bc3-4249-4a72-b126-8059732954b0" satisfied condition "Succeeded or Failed"
    Sep 12 20:46:00.960: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod client-containers-78c97bc3-4249-4a72-b126-8059732954b0 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 20:46:00.977: INFO: Waiting for pod client-containers-78c97bc3-4249-4a72-b126-8059732954b0 to disappear
    Sep 12 20:46:00.981: INFO: Pod client-containers-78c97bc3-4249-4a72-b126-8059732954b0 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:46:00.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-1557" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":322,"failed":0}

    
    SSSSSSSSSSSSSSS
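
    The Docker Containers tests in this run exercise the override rules: in the pod API, command replaces the image's ENTRYPOINT and args replaces its CMD; leaving one unset keeps that image default. A sketch with an illustrative image and values:

    import corev1 "k8s.io/api/core/v1"

    // overrideImageDefaults overrides both ENTRYPOINT and CMD.
    func overrideImageDefaults() corev1.Container {
        return corev1.Container{
            Name:    "demo",
            Image:   "busybox",
            Command: []string{"echo"},             // replaces ENTRYPOINT
            Args:    []string{"overridden", "ok"}, // replaces CMD
        }
    }
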
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 20:46:01.051: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0563300-a036-48b7-878f-079cc5a14df0" in namespace "downward-api-4754" to be "Succeeded or Failed"
    Sep 12 20:46:01.055: INFO: Pod "downwardapi-volume-d0563300-a036-48b7-878f-079cc5a14df0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.673909ms
    Sep 12 20:46:03.060: INFO: Pod "downwardapi-volume-d0563300-a036-48b7-878f-079cc5a14df0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008864821s
    STEP: Saw pod success
    Sep 12 20:46:03.060: INFO: Pod "downwardapi-volume-d0563300-a036-48b7-878f-079cc5a14df0" satisfied condition "Succeeded or Failed"
    Sep 12 20:46:03.063: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod downwardapi-volume-d0563300-a036-48b7-878f-079cc5a14df0 container client-container: <nil>
    STEP: delete the pod
    Sep 12 20:46:03.084: INFO: Waiting for pod downwardapi-volume-d0563300-a036-48b7-878f-079cc5a14df0 to disappear
    Sep 12 20:46:03.088: INFO: Pod downwardapi-volume-d0563300-a036-48b7-878f-079cc5a14df0 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:46:03.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-4754" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":337,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Looking for a node to schedule stateful set and pod
    STEP: Creating pod with conflicting port in namespace statefulset-9244
    STEP: Creating statefulset with conflicting port in namespace statefulset-9244
    STEP: Waiting until pod test-pod starts running in namespace statefulset-9244
    Sep 12 20:46:09.208: INFO: Observed stateful pod in namespace: statefulset-9244, name: ss-0, uid: a92617c5-f412-478f-a3ea-9813bc7ecbd2, status phase: Pending. Waiting for statefulset controller to delete.
    Sep 12 20:46:09.683: INFO: Observed stateful pod in namespace: statefulset-9244, name: ss-0, uid: a92617c5-f412-478f-a3ea-9813bc7ecbd2, status phase: Failed. Waiting for statefulset controller to delete.
    Sep 12 20:46:09.690: INFO: Observed stateful pod in namespace: statefulset-9244, name: ss-0, uid: a92617c5-f412-478f-a3ea-9813bc7ecbd2, status phase: Failed. Waiting for statefulset controller to delete.
    Sep 12 20:46:09.692: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9244
    STEP: Removing pod with conflicting port in namespace statefulset-9244
    STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-9244 and reaches the running state
    [AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
    Sep 12 20:46:13.719: INFO: Deleting all statefulset in ns statefulset-9244
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:46:33.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-9244" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":13,"skipped":366,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:46:35.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-8784" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":369,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:46:48.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9484" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":207,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:46:52.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-8287" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":238,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:46:52.410: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating secret secrets-3863/secret-test-8080dfe7-660a-44bb-90d7-1c7534990bb0
    STEP: Creating a pod to test consume secrets
    Sep 12 20:46:52.445: INFO: Waiting up to 5m0s for pod "pod-configmaps-c3cd8274-459f-4321-9e7e-50dd9b2c2b00" in namespace "secrets-3863" to be "Succeeded or Failed"
    Sep 12 20:46:52.447: INFO: Pod "pod-configmaps-c3cd8274-459f-4321-9e7e-50dd9b2c2b00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.492908ms
    Sep 12 20:46:54.452: INFO: Pod "pod-configmaps-c3cd8274-459f-4321-9e7e-50dd9b2c2b00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00752624s
    STEP: Saw pod success
    Sep 12 20:46:54.452: INFO: Pod "pod-configmaps-c3cd8274-459f-4321-9e7e-50dd9b2c2b00" satisfied condition "Succeeded or Failed"
    Sep 12 20:46:54.455: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod pod-configmaps-c3cd8274-459f-4321-9e7e-50dd9b2c2b00 container env-test: <nil>
    STEP: delete the pod
    Sep 12 20:46:54.469: INFO: Waiting for pod pod-configmaps-c3cd8274-459f-4321-9e7e-50dd9b2c2b00 to disappear
    Sep 12 20:46:54.471: INFO: Pod pod-configmaps-c3cd8274-459f-4321-9e7e-50dd9b2c2b00 no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:46:54.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3863" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":244,"failed":0}

    
    SSSSSS
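
    The test above materializes one Secret key as an environment variable via secretKeyRef. A sketch of the container shape (Secret name, key, and env-var name are illustrative, not the generated ones):

    import corev1 "k8s.io/api/core/v1"

    // secretEnvContainer pulls a single Secret key into the environment.
    func secretEnvContainer() corev1.Container {
        return corev1.Container{
            Name:    "env-test",
            Image:   "busybox",
            Command: []string{"sh", "-c", "echo $SECRET_DATA"},
            Env: []corev1.EnvVar{{
                Name: "SECRET_DATA",
                ValueFrom: &corev1.EnvVarSource{
                    SecretKeyRef: &corev1.SecretKeySelector{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "secret-demo"},
                        Key:                  "data-1",
                    },
                },
            }},
        }
    }
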
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:46:54.491: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override arguments
    Sep 12 20:46:54.528: INFO: Waiting up to 5m0s for pod "client-containers-ccbe6bd0-2c48-4910-8d46-b0e151906b1f" in namespace "containers-3123" to be "Succeeded or Failed"
    Sep 12 20:46:54.530: INFO: Pod "client-containers-ccbe6bd0-2c48-4910-8d46-b0e151906b1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.682306ms
    Sep 12 20:46:56.536: INFO: Pod "client-containers-ccbe6bd0-2c48-4910-8d46-b0e151906b1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007772122s
    STEP: Saw pod success
    Sep 12 20:46:56.536: INFO: Pod "client-containers-ccbe6bd0-2c48-4910-8d46-b0e151906b1f" satisfied condition "Succeeded or Failed"
    Sep 12 20:46:56.539: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod client-containers-ccbe6bd0-2c48-4910-8d46-b0e151906b1f container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 20:46:56.559: INFO: Waiting for pod client-containers-ccbe6bd0-2c48-4910-8d46-b0e151906b1f to disappear
    Sep 12 20:46:56.563: INFO: Pod client-containers-ccbe6bd0-2c48-4910-8d46-b0e151906b1f no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:46:56.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-3123" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":250,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
    [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
    STEP: Watching for error events or started pod
    STEP: Waiting for pod completion
    STEP: Checking that the pod succeeded
    STEP: Getting logs from the pod
    STEP: Checking that the sysctl is actually updated
    [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:46:58.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-1568" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":14,"skipped":305,"failed":0}

    
    SSSSSSS
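
    The sysctl test above sets kernel.shm_rmid_forced through the pod-level security context; sysctls outside the kubelet's safe set must additionally appear in the node's --allowed-unsafe-sysctls allowlist or the pod is rejected. A sketch of the field involved:

    import corev1 "k8s.io/api/core/v1"

    // sysctlSecurityContext requests a namespaced sysctl for the pod.
    func sysctlSecurityContext() *corev1.PodSecurityContext {
        return &corev1.PodSecurityContext{
            Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
        }
    }
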
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:47:00.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-392" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":15,"skipped":312,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:47:09.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-9689" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":16,"skipped":324,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-scheduling] LimitRange
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:47:16.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "limitrange-2071" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":17,"skipped":340,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:47:16.772: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override all
    Sep 12 20:47:16.809: INFO: Waiting up to 5m0s for pod "client-containers-25db5cbe-6d1f-4e0a-8a79-1762e9cb01c5" in namespace "containers-9679" to be "Succeeded or Failed"
    Sep 12 20:47:16.812: INFO: Pod "client-containers-25db5cbe-6d1f-4e0a-8a79-1762e9cb01c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.95974ms
    Sep 12 20:47:18.817: INFO: Pod "client-containers-25db5cbe-6d1f-4e0a-8a79-1762e9cb01c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008365796s
    STEP: Saw pod success
    Sep 12 20:47:18.818: INFO: Pod "client-containers-25db5cbe-6d1f-4e0a-8a79-1762e9cb01c5" satisfied condition "Succeeded or Failed"
    Sep 12 20:47:18.820: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod client-containers-25db5cbe-6d1f-4e0a-8a79-1762e9cb01c5 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 20:47:18.842: INFO: Waiting for pod client-containers-25db5cbe-6d1f-4e0a-8a79-1762e9cb01c5 to disappear
    Sep 12 20:47:18.847: INFO: Pod client-containers-25db5cbe-6d1f-4e0a-8a79-1762e9cb01c5 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:47:18.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-9679" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":429,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:47:18.879: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep 12 20:47:18.915: INFO: Waiting up to 5m0s for pod "pod-2d0f26c6-0fdf-464d-8eca-2ce8d12a84e6" in namespace "emptydir-7908" to be "Succeeded or Failed"
    Sep 12 20:47:18.918: INFO: Pod "pod-2d0f26c6-0fdf-464d-8eca-2ce8d12a84e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.732133ms
    Sep 12 20:47:20.922: INFO: Pod "pod-2d0f26c6-0fdf-464d-8eca-2ce8d12a84e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006980965s
    STEP: Saw pod success
    Sep 12 20:47:20.923: INFO: Pod "pod-2d0f26c6-0fdf-464d-8eca-2ce8d12a84e6" satisfied condition "Succeeded or Failed"
    Sep 12 20:47:20.925: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-2d0f26c6-0fdf-464d-8eca-2ce8d12a84e6 container test-container: <nil>
    STEP: delete the pod
    Sep 12 20:47:20.937: INFO: Waiting for pod pod-2d0f26c6-0fdf-464d-8eca-2ce8d12a84e6 to disappear
    Sep 12 20:47:20.940: INFO: Pod pod-2d0f26c6-0fdf-464d-8eca-2ce8d12a84e6 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:47:20.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7908" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":442,"failed":0}

    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:47:20.949: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename resourcequota
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:47:37.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-2799" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":20,"skipped":442,"failed":0}

    
    SSS
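
    The ResourceQuota test above verifies terminating scopes: a quota with the Terminating scope only counts pods that set spec.activeDeadlineSeconds, while NotTerminating covers the complementary set. A sketch of such a quota object (name and limits illustrative):

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // terminatingQuota caps pods that have an active deadline set.
    func terminatingQuota() *corev1.ResourceQuota {
        return &corev1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "quota-terminating"},
            Spec: corev1.ResourceQuotaSpec{
                Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("1")},
                Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeTerminating},
            },
        }
    }
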
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:47:41.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-6956" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":21,"skipped":445,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
    [It] should serve a basic endpoint from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating service endpoint-test2 in namespace services-3290
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3290 to expose endpoints map[]
    Sep 12 20:47:41.255: INFO: Failed to get Endpoints object: endpoints "endpoint-test2" not found
    Sep 12 20:47:42.264: INFO: successfully validated that service endpoint-test2 in namespace services-3290 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-3290
    Sep 12 20:47:42.272: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep 12 20:47:44.276: INFO: The status of Pod pod1 is Running (Ready = true)
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3290 to expose endpoints map[pod1:[80]]
    Sep 12 20:47:44.289: INFO: successfully validated that service endpoint-test2 in namespace services-3290 exposes endpoints map[pod1:[80]]
... skipping 14 lines ...
    STEP: Destroying namespace "services-3290" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":22,"skipped":470,"failed":0}

    
    SSSSSSSSS
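
    The "waiting ... to expose endpoints map[...]" steps above poll the service's Endpoints object; the initial "Failed to get Endpoints object" line is just the not-found result of the first poll. A rough client-go equivalent of that loop, assuming a configured clientset (not the framework's own validation helper, which also compares the expected pod-to-port map):

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForEndpoints polls until the service publishes at least one ready
    // address; a not-found Endpoints object just means "poll again".
    func waitForEndpoints(cs kubernetes.Interface, ns, svc string) error {
        return wait.PollImmediate(time.Second, 3*time.Minute, func() (bool, error) {
            ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), svc, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return false, nil
            }
            if err != nil {
                return false, err
            }
            for _, subset := range ep.Subsets {
                if len(subset.Addresses) > 0 {
                    return true, nil
                }
            }
            return false, nil
        })
    }
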
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 139 lines ...
    Sep 12 20:47:27.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6210 exec execpod-affinitykg6lh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.6 31993'
    Sep 12 20:47:29.715: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.6 31993\nConnection to 172.18.0.6 31993 port [tcp/*] succeeded!\n"
    Sep 12 20:47:29.715: INFO: stdout: ""
    Sep 12 20:47:29.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6210 exec execpod-affinitykg6lh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.6 31993'
    Sep 12 20:47:31.898: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.6 31993\nConnection to 172.18.0.6 31993 port [tcp/*] succeeded!\n"
    Sep 12 20:47:31.898: INFO: stdout: ""
    Sep 12 20:47:31.898: FAIL: Unexpected error:
        <*errors.errorString | 0xc0044e2350>: {
            s: "service is not reachable within 2m0s timeout on endpoint 172.18.0.6:31993 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint 172.18.0.6:31993 over TCP protocol
    occurred
    
... skipping 27 lines ...
    • Failure [147.083 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 12 20:47:31.898: Unexpected error:
          <*errors.errorString | 0xc0044e2350>: {
              s: "service is not reachable within 2m0s timeout on endpoint 172.18.0.6:31993 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint 172.18.0.6:31993 over TCP protocol
      occurred
    
... skipping 109 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:48:07.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-8672" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":15,"skipped":381,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with downward pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-downwardapi-jmnx
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 12 20:47:46.658: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-jmnx" in namespace "subpath-9684" to be "Succeeded or Failed"
    Sep 12 20:47:46.678: INFO: Pod "pod-subpath-test-downwardapi-jmnx": Phase="Pending", Reason="", readiness=false. Elapsed: 19.956709ms
    Sep 12 20:47:48.684: INFO: Pod "pod-subpath-test-downwardapi-jmnx": Phase="Running", Reason="", readiness=true. Elapsed: 2.025998882s
    Sep 12 20:47:50.690: INFO: Pod "pod-subpath-test-downwardapi-jmnx": Phase="Running", Reason="", readiness=true. Elapsed: 4.032516098s
    Sep 12 20:47:52.693: INFO: Pod "pod-subpath-test-downwardapi-jmnx": Phase="Running", Reason="", readiness=true. Elapsed: 6.035672088s
    Sep 12 20:47:54.699: INFO: Pod "pod-subpath-test-downwardapi-jmnx": Phase="Running", Reason="", readiness=true. Elapsed: 8.041447292s
    Sep 12 20:47:56.704: INFO: Pod "pod-subpath-test-downwardapi-jmnx": Phase="Running", Reason="", readiness=true. Elapsed: 10.045945337s
    Sep 12 20:47:58.709: INFO: Pod "pod-subpath-test-downwardapi-jmnx": Phase="Running", Reason="", readiness=true. Elapsed: 12.051126288s
    Sep 12 20:48:00.715: INFO: Pod "pod-subpath-test-downwardapi-jmnx": Phase="Running", Reason="", readiness=true. Elapsed: 14.056820042s
    Sep 12 20:48:02.719: INFO: Pod "pod-subpath-test-downwardapi-jmnx": Phase="Running", Reason="", readiness=true. Elapsed: 16.060958818s
    Sep 12 20:48:04.724: INFO: Pod "pod-subpath-test-downwardapi-jmnx": Phase="Running", Reason="", readiness=true. Elapsed: 18.065892838s
    Sep 12 20:48:06.728: INFO: Pod "pod-subpath-test-downwardapi-jmnx": Phase="Running", Reason="", readiness=true. Elapsed: 20.070634474s
    Sep 12 20:48:08.732: INFO: Pod "pod-subpath-test-downwardapi-jmnx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.074561264s
    STEP: Saw pod success
    Sep 12 20:48:08.732: INFO: Pod "pod-subpath-test-downwardapi-jmnx" satisfied condition "Succeeded or Failed"
    Sep 12 20:48:08.735: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-mgm4ov pod pod-subpath-test-downwardapi-jmnx container test-container-subpath-downwardapi-jmnx: <nil>
    STEP: delete the pod
    Sep 12 20:48:08.758: INFO: Waiting for pod pod-subpath-test-downwardapi-jmnx to disappear
    Sep 12 20:48:08.761: INFO: Pod pod-subpath-test-downwardapi-jmnx no longer exists
    STEP: Deleting pod pod-subpath-test-downwardapi-jmnx
    Sep 12 20:48:08.761: INFO: Deleting pod "pod-subpath-test-downwardapi-jmnx" in namespace "subpath-9684"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:48:08.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-9684" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":23,"skipped":479,"failed":0}

    
    SSSSSSSSSSSS
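
    The Subpath test above mounts a single file out of a downward-API volume via subPath instead of mounting the volume root. A sketch of the volume/mount pair (volume name and paths are illustrative):

    import corev1 "k8s.io/api/core/v1"

    // subPathMount projects the pod name into a volume file and mounts just
    // that file into the container rather than the whole volume.
    func subPathMount() (corev1.Volume, corev1.VolumeMount) {
        vol := corev1.Volume{
            Name: "downward",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path:     "podname",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    }},
                },
            },
        }
        mount := corev1.VolumeMount{
            Name:      "downward",
            MountPath: "/etc/podname",
            SubPath:   "podname", // mount this one file, not the volume root
        }
        return vol, mount
    }
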
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:48:08.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9156" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":24,"skipped":491,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:48:16.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-7376" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":25,"skipped":493,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:48:18.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "certificates-7003" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":26,"skipped":509,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:48:18.261: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep 12 20:48:18.295: INFO: Waiting up to 5m0s for pod "pod-ee01acc6-c9b5-4faf-89fb-a8b2f7980566" in namespace "emptydir-5389" to be "Succeeded or Failed"
    Sep 12 20:48:18.298: INFO: Pod "pod-ee01acc6-c9b5-4faf-89fb-a8b2f7980566": Phase="Pending", Reason="", readiness=false. Elapsed: 3.346595ms
    Sep 12 20:48:20.302: INFO: Pod "pod-ee01acc6-c9b5-4faf-89fb-a8b2f7980566": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00701229s
    STEP: Saw pod success
    Sep 12 20:48:20.302: INFO: Pod "pod-ee01acc6-c9b5-4faf-89fb-a8b2f7980566" satisfied condition "Succeeded or Failed"
    Sep 12 20:48:20.305: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-ee01acc6-c9b5-4faf-89fb-a8b2f7980566 container test-container: <nil>
    STEP: delete the pod
    Sep 12 20:48:20.324: INFO: Waiting for pod pod-ee01acc6-c9b5-4faf-89fb-a8b2f7980566 to disappear
    Sep 12 20:48:20.326: INFO: Pod pod-ee01acc6-c9b5-4faf-89fb-a8b2f7980566 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:48:20.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-5389" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":532,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
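The repeated "Waiting up to 5m0s for pod ... to be "Succeeded or Failed"" lines in the EmptyDir test above are the e2e framework's standard pod-completion poll. A minimal client-go sketch of the same check, assuming an already configured clientset (the function and variable names here are illustrative, not the framework's own):

    package sketch

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodCompletion polls a pod every 2s, up to 5m, until it reaches
    // Succeeded or Failed, mirroring the wait loop in the log above.
    func waitForPodCompletion(c *kubernetes.Clientset, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case v1.PodSucceeded:
                return true, nil
            case v1.PodFailed:
                return false, fmt.Errorf("pod %s/%s failed", ns, name)
            }
            return false, nil // still Pending or Running; keep polling
        })
    }

The framework's real helper also dumps pod logs and events on timeout; this sketch covers only the phase check.
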
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:48:29.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-1100" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":382,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:48:31.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-3682" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":17,"skipped":386,"failed":0}

    
    SSSSS
    ------------------------------
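The garbage-collector test above deletes a Deployment with deleteOptions.PropagationPolicy=Orphan so that its ReplicaSet survives the delete. A sketch of that delete call with client-go, assuming a configured clientset:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteDeploymentOrphaningRS removes the Deployment object itself while
    // leaving its owned ReplicaSets in place (the garbage collector skips them
    // because of the Orphan propagation policy).
    func deleteDeploymentOrphaningRS(c *kubernetes.Clientset, ns, name string) error {
        orphan := metav1.DeletePropagationOrphan
        return c.AppsV1().Deployments(ns).Delete(context.TODO(), name,
            metav1.DeleteOptions{PropagationPolicy: &orphan})
    }
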
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:48:37.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-7033" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":18,"skipped":391,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
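The DisruptionController test above lists PodDisruptionBudgets across all namespaces and then deletes them as a collection. Roughly, with client-go (the label selector and target namespace here are illustrative):

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listAndDeletePDBs lists PodDisruptionBudgets across all namespaces (the
    // empty namespace argument) and then deletes a labeled collection in the
    // default namespace with a single DeleteCollection call.
    func listAndDeletePDBs(c *kubernetes.Clientset, selector string) error {
        pdbs, err := c.PolicyV1().PodDisruptionBudgets("").List(context.TODO(),
            metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return err
        }
        for _, pdb := range pdbs.Items {
            fmt.Printf("found %s/%s\n", pdb.Namespace, pdb.Name)
        }
        return c.PolicyV1().PodDisruptionBudgets(metav1.NamespaceDefault).DeleteCollection(
            context.TODO(), metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: selector})
    }
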
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-2021" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":19,"skipped":413,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    Sep 12 20:48:22.455: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:22.459: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:22.487: INFO: Unable to read jessie_udp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:22.491: INFO: Unable to read jessie_tcp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:22.496: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:22.500: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:22.521: INFO: Lookups using dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb failed for: [wheezy_udp@dns-test-service.dns-4450.svc.cluster.local wheezy_tcp@dns-test-service.dns-4450.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4450.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4450.svc.cluster.local jessie_udp@dns-test-service.dns-4450.svc.cluster.local jessie_tcp@dns-test-service.dns-4450.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4450.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4450.svc.cluster.local]

    
    Sep 12 20:48:27.527: INFO: Unable to read wheezy_udp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:27.530: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:27.565: INFO: Unable to read jessie_udp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:27.568: INFO: Unable to read jessie_tcp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:27.593: INFO: Lookups using dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb failed for: [wheezy_udp@dns-test-service.dns-4450.svc.cluster.local wheezy_tcp@dns-test-service.dns-4450.svc.cluster.local jessie_udp@dns-test-service.dns-4450.svc.cluster.local jessie_tcp@dns-test-service.dns-4450.svc.cluster.local]

    
    Sep 12 20:48:32.526: INFO: Unable to read wheezy_udp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:32.530: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:32.563: INFO: Unable to read jessie_udp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:32.566: INFO: Unable to read jessie_tcp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:32.593: INFO: Lookups using dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb failed for: [wheezy_udp@dns-test-service.dns-4450.svc.cluster.local wheezy_tcp@dns-test-service.dns-4450.svc.cluster.local jessie_udp@dns-test-service.dns-4450.svc.cluster.local jessie_tcp@dns-test-service.dns-4450.svc.cluster.local]

    
    Sep 12 20:48:37.527: INFO: Unable to read wheezy_udp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:37.532: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:37.585: INFO: Unable to read jessie_udp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:37.590: INFO: Unable to read jessie_tcp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:37.647: INFO: Lookups using dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb failed for: [wheezy_udp@dns-test-service.dns-4450.svc.cluster.local wheezy_tcp@dns-test-service.dns-4450.svc.cluster.local jessie_udp@dns-test-service.dns-4450.svc.cluster.local jessie_tcp@dns-test-service.dns-4450.svc.cluster.local]

    
    Sep 12 20:48:42.527: INFO: Unable to read wheezy_udp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:42.532: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:42.574: INFO: Unable to read jessie_udp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:42.578: INFO: Unable to read jessie_tcp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:42.624: INFO: Lookups using dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb failed for: [wheezy_udp@dns-test-service.dns-4450.svc.cluster.local wheezy_tcp@dns-test-service.dns-4450.svc.cluster.local jessie_udp@dns-test-service.dns-4450.svc.cluster.local jessie_tcp@dns-test-service.dns-4450.svc.cluster.local]

    
    Sep 12 20:48:47.526: INFO: Unable to read wheezy_udp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:47.530: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:47.574: INFO: Unable to read jessie_udp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:47.578: INFO: Unable to read jessie_tcp@dns-test-service.dns-4450.svc.cluster.local from pod dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb: the server could not find the requested resource (get pods dns-test-4e3534d6-1aad-45a3-8317-f415115051bb)
    Sep 12 20:48:47.619: INFO: Lookups using dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb failed for: [wheezy_udp@dns-test-service.dns-4450.svc.cluster.local wheezy_tcp@dns-test-service.dns-4450.svc.cluster.local jessie_udp@dns-test-service.dns-4450.svc.cluster.local jessie_tcp@dns-test-service.dns-4450.svc.cluster.local]

    
    Sep 12 20:48:52.616: INFO: DNS probes using dns-4450/dns-test-4e3534d6-1aad-45a3-8317-f415115051bb succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:48:52.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-4450" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":28,"skipped":557,"failed":0}

    
    S
    ------------------------------
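The DNS probes above retry roughly every 5s until lookups of <service>.<namespace>.svc.cluster.local succeed from inside the cluster. A simplified in-cluster sketch of that retry loop (the real test also checks SRV records and both UDP and TCP resolution paths):

    package sketch

    import (
        "fmt"
        "net"
        "time"
    )

    // probeServiceDNS retries a lookup of a service FQDN until it resolves or
    // the timeout elapses, similar to the probe loop in the log above.
    func probeServiceDNS(service, namespace string, timeout time.Duration) error {
        fqdn := fmt.Sprintf("%s.%s.svc.cluster.local", service, namespace)
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if addrs, err := net.LookupHost(fqdn); err == nil && len(addrs) > 0 {
                return nil // lookup succeeded
            }
            time.Sleep(5 * time.Second) // matches the ~5s probe interval above
        }
        return fmt.Errorf("DNS lookup for %s did not succeed within %s", fqdn, timeout)
    }
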
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:48:52.725: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-3511/configmap-test-958df07b-882e-4c32-82d9-a6d5ecee48a8
    STEP: Creating a pod to test consume configMaps
    Sep 12 20:48:52.791: INFO: Waiting up to 5m0s for pod "pod-configmaps-f252cd50-5ccf-4a51-a579-815cb551c955" in namespace "configmap-3511" to be "Succeeded or Failed"
    Sep 12 20:48:52.794: INFO: Pod "pod-configmaps-f252cd50-5ccf-4a51-a579-815cb551c955": Phase="Pending", Reason="", readiness=false. Elapsed: 3.382924ms
    Sep 12 20:48:54.800: INFO: Pod "pod-configmaps-f252cd50-5ccf-4a51-a579-815cb551c955": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009353746s
    STEP: Saw pod success
    Sep 12 20:48:54.800: INFO: Pod "pod-configmaps-f252cd50-5ccf-4a51-a579-815cb551c955" satisfied condition "Succeeded or Failed"
    Sep 12 20:48:54.804: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-configmaps-f252cd50-5ccf-4a51-a579-815cb551c955 container env-test: <nil>
    STEP: delete the pod
    Sep 12 20:48:54.823: INFO: Waiting for pod pod-configmaps-f252cd50-5ccf-4a51-a579-815cb551c955 to disappear
    Sep 12 20:48:54.826: INFO: Pod pod-configmaps-f252cd50-5ccf-4a51-a579-815cb551c955 no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:48:54.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-3511" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":558,"failed":0}

    
    SSSSSSS
    ------------------------------
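The ConfigMap test above consumes a ConfigMap key through the container environment. A sketch of the pod shape involved; the env var name, key, and image are illustrative, not the test's exact values:

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // envTestPod builds a pod that surfaces one ConfigMap key as an
    // environment variable and simply prints its environment.
    func envTestPod(configMapName string) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:    "env-test",
                    Image:   "registry.k8s.io/e2e-test-images/busybox:1.29", // illustrative
                    Command: []string{"sh", "-c", "env"},
                    Env: []v1.EnvVar{{
                        Name: "CONFIG_DATA_1",
                        ValueFrom: &v1.EnvVarSource{
                            ConfigMapKeyRef: &v1.ConfigMapKeySelector{
                                LocalObjectReference: v1.LocalObjectReference{Name: configMapName},
                                Key:                  "data-1",
                            },
                        },
                    }},
                }},
            },
        }
    }
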
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:48:54.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-5334" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":30,"skipped":565,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSliceMirroring
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:48:55.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslicemirroring-6123" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":20,"skipped":416,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
    STEP: Destroying namespace "webhook-5375-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":21,"skipped":420,"failed":0}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:49:17.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-9855" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":572,"failed":0}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:49:17.283: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep 12 20:49:17.331: INFO: Waiting up to 5m0s for pod "pod-0d8abcc8-0000-42b4-9d22-04b9795f821c" in namespace "emptydir-8080" to be "Succeeded or Failed"
    Sep 12 20:49:17.336: INFO: Pod "pod-0d8abcc8-0000-42b4-9d22-04b9795f821c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.886716ms
    Sep 12 20:49:19.341: INFO: Pod "pod-0d8abcc8-0000-42b4-9d22-04b9795f821c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009206885s
    STEP: Saw pod success
    Sep 12 20:49:19.341: INFO: Pod "pod-0d8abcc8-0000-42b4-9d22-04b9795f821c" satisfied condition "Succeeded or Failed"
    Sep 12 20:49:19.344: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-0d8abcc8-0000-42b4-9d22-04b9795f821c container test-container: <nil>
    STEP: delete the pod
    Sep 12 20:49:19.366: INFO: Waiting for pod pod-0d8abcc8-0000-42b4-9d22-04b9795f821c to disappear
    Sep 12 20:49:19.369: INFO: Pod pod-0d8abcc8-0000-42b4-9d22-04b9795f821c no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:49:19.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8080" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":590,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Destroying namespace "services-4887" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":33,"skipped":603,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:49:19.489: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 12 20:49:21.533: INFO: Deleting pod "var-expansion-48a0a651-b4a8-4a60-ae35-6e82da1cbbf0" in namespace "var-expansion-2553"
    Sep 12 20:49:21.541: INFO: Wait up to 5m0s for pod "var-expansion-48a0a651-b4a8-4a60-ae35-6e82da1cbbf0" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:49:31.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-2553" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":34,"skipped":622,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
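The Variable Expansion test above builds a pod whose volumeMount subPathExpr expands to an absolute path and expects it to be rejected, since subpaths must stay relative to the volume root. A sketch of the offending container shape (the names, env value, and image are illustrative):

    package sketch

    import v1 "k8s.io/api/core/v1"

    // absoluteSubPathMount sketches the mount the test expects to fail:
    // subPathExpr expands an env var whose value is an absolute path, which
    // is invalid because a subpath may not escape the volume root.
    func absoluteSubPathMount() v1.Container {
        return v1.Container{
            Name:  "dapi-container",
            Image: "registry.k8s.io/e2e-test-images/busybox:1.29", // illustrative
            Env: []v1.EnvVar{{
                Name:  "POD_NAME",
                Value: "/absolute-path", // absolute value makes the expansion invalid
            }},
            VolumeMounts: []v1.VolumeMount{{
                Name:        "workdir1",
                MountPath:   "/volume_mount",
                SubPathExpr: "$(POD_NAME)",
            }},
        }
    }
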
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:49:36.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-7288" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":35,"skipped":632,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:49:36.721: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-331617ef-a121-434d-b89d-21de660a3f74
    STEP: Creating a pod to test consume secrets
    Sep 12 20:49:36.753: INFO: Waiting up to 5m0s for pod "pod-secrets-51fa4a49-1c99-41a5-a455-7abc6390b544" in namespace "secrets-8286" to be "Succeeded or Failed"
    Sep 12 20:49:36.755: INFO: Pod "pod-secrets-51fa4a49-1c99-41a5-a455-7abc6390b544": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28493ms
    Sep 12 20:49:38.761: INFO: Pod "pod-secrets-51fa4a49-1c99-41a5-a455-7abc6390b544": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007701478s
    STEP: Saw pod success
    Sep 12 20:49:38.761: INFO: Pod "pod-secrets-51fa4a49-1c99-41a5-a455-7abc6390b544" satisfied condition "Succeeded or Failed"
    Sep 12 20:49:38.764: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-secrets-51fa4a49-1c99-41a5-a455-7abc6390b544 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 12 20:49:38.780: INFO: Waiting for pod pod-secrets-51fa4a49-1c99-41a5-a455-7abc6390b544 to disappear
    Sep 12 20:49:38.783: INFO: Pod pod-secrets-51fa4a49-1c99-41a5-a455-7abc6390b544 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:49:38.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-8286" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":664,"failed":0}

    
    SSSS
    ------------------------------
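The Secrets test above mounts a secret volume with an explicit defaultMode while running as a non-root user with an fsGroup. A sketch of that pod shape, with illustrative mode, UID/GID, and image:

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // secretVolumePod mounts a secret with an explicit file mode; the fsGroup
    // in the pod security context controls group ownership of the files.
    func secretVolumePod(secretName string) *v1.Pod {
        mode := int32(0440)
        uid, gid := int64(1000), int64(1001)
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
            Spec: v1.PodSpec{
                SecurityContext: &v1.PodSecurityContext{RunAsUser: &uid, FSGroup: &gid},
                RestartPolicy:   v1.RestartPolicyNever,
                Volumes: []v1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: v1.VolumeSource{
                        Secret: &v1.SecretVolumeSource{SecretName: secretName, DefaultMode: &mode},
                    },
                }},
                Containers: []v1.Container{{
                    Name:         "secret-volume-test",
                    Image:        "registry.k8s.io/e2e-test-images/agnhost:2.39", // illustrative
                    VolumeMounts: []v1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
                }},
            },
        }
    }
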
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:49:50.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7441" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":37,"skipped":668,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:49:50.525: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-170454b8-a459-4866-91e1-ced564e9409a
    STEP: Creating a pod to test consume configMaps
    Sep 12 20:49:50.567: INFO: Waiting up to 5m0s for pod "pod-configmaps-31cc7ea9-8469-4bdf-8cc2-ffda3c5f7d5d" in namespace "configmap-180" to be "Succeeded or Failed"
    Sep 12 20:49:50.571: INFO: Pod "pod-configmaps-31cc7ea9-8469-4bdf-8cc2-ffda3c5f7d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.847206ms
    Sep 12 20:49:52.575: INFO: Pod "pod-configmaps-31cc7ea9-8469-4bdf-8cc2-ffda3c5f7d5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007415164s
    STEP: Saw pod success
    Sep 12 20:49:52.575: INFO: Pod "pod-configmaps-31cc7ea9-8469-4bdf-8cc2-ffda3c5f7d5d" satisfied condition "Succeeded or Failed"
    Sep 12 20:49:52.578: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-configmaps-31cc7ea9-8469-4bdf-8cc2-ffda3c5f7d5d container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 20:49:52.592: INFO: Waiting for pod pod-configmaps-31cc7ea9-8469-4bdf-8cc2-ffda3c5f7d5d to disappear
    Sep 12 20:49:52.594: INFO: Pod pod-configmaps-31cc7ea9-8469-4bdf-8cc2-ffda3c5f7d5d no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:49:52.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-180" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":688,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
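The "with mappings" variant above uses items to project a single ConfigMap key to a custom relative path inside the mount. A sketch of the volume definition (the key and path are illustrative):

    package sketch

    import v1 "k8s.io/api/core/v1"

    // mappedConfigMapVolume projects one ConfigMap key to a chosen file path
    // within the volume instead of the default one-file-per-key layout.
    func mappedConfigMapVolume(configMapName string) v1.Volume {
        return v1.Volume{
            Name: "configmap-volume",
            VolumeSource: v1.VolumeSource{
                ConfigMap: &v1.ConfigMapVolumeSource{
                    LocalObjectReference: v1.LocalObjectReference{Name: configMapName},
                    Items: []v1.KeyToPath{{
                        Key:  "data-1",         // key in the ConfigMap
                        Path: "path/to/data-2", // relative path inside the mount
                    }},
                },
            },
        }
    }
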
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:49:52.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-9108" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":39,"skipped":700,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 20:49:52.739: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b361473-a909-45e7-8f6a-9cbfcd9b3751" in namespace "downward-api-7248" to be "Succeeded or Failed"
    Sep 12 20:49:52.742: INFO: Pod "downwardapi-volume-1b361473-a909-45e7-8f6a-9cbfcd9b3751": Phase="Pending", Reason="", readiness=false. Elapsed: 2.89039ms
    Sep 12 20:49:54.746: INFO: Pod "downwardapi-volume-1b361473-a909-45e7-8f6a-9cbfcd9b3751": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006926622s
    STEP: Saw pod success
    Sep 12 20:49:54.746: INFO: Pod "downwardapi-volume-1b361473-a909-45e7-8f6a-9cbfcd9b3751" satisfied condition "Succeeded or Failed"
    Sep 12 20:49:54.749: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod downwardapi-volume-1b361473-a909-45e7-8f6a-9cbfcd9b3751 container client-container: <nil>
    STEP: delete the pod
    Sep 12 20:49:54.764: INFO: Waiting for pod downwardapi-volume-1b361473-a909-45e7-8f6a-9cbfcd9b3751 to disappear
    Sep 12 20:49:54.767: INFO: Pod downwardapi-volume-1b361473-a909-45e7-8f6a-9cbfcd9b3751 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:49:54.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7248" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":711,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
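The Downward API test above exposes the container's memory request as a file through a downward API volume. A sketch of that volume (the path and names are illustrative):

    package sketch

    import v1 "k8s.io/api/core/v1"

    // memoryRequestVolume surfaces requests.memory of the named container as
    // the file "memory_request" inside the mounted volume.
    func memoryRequestVolume(containerName string) v1.Volume {
        return v1.Volume{
            Name: "podinfo",
            VolumeSource: v1.VolumeSource{
                DownwardAPI: &v1.DownwardAPIVolumeSource{
                    Items: []v1.DownwardAPIVolumeFile{{
                        Path: "memory_request",
                        ResourceFieldRef: &v1.ResourceFieldSelector{
                            ContainerName: containerName,
                            Resource:      "requests.memory",
                        },
                    }},
                },
            },
        }
    }
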
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:49:54.818: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-b728bb29-46f7-41e5-b323-8ce133bcb35e
    STEP: Creating a pod to test consume secrets
    Sep 12 20:49:54.893: INFO: Waiting up to 5m0s for pod "pod-secrets-a29334d1-fccd-4a2d-84c7-5f2c6270da49" in namespace "secrets-5872" to be "Succeeded or Failed"
    Sep 12 20:49:54.896: INFO: Pod "pod-secrets-a29334d1-fccd-4a2d-84c7-5f2c6270da49": Phase="Pending", Reason="", readiness=false. Elapsed: 3.003831ms
    Sep 12 20:49:56.900: INFO: Pod "pod-secrets-a29334d1-fccd-4a2d-84c7-5f2c6270da49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007824511s
    STEP: Saw pod success
    Sep 12 20:49:56.900: INFO: Pod "pod-secrets-a29334d1-fccd-4a2d-84c7-5f2c6270da49" satisfied condition "Succeeded or Failed"
    Sep 12 20:49:56.903: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-secrets-a29334d1-fccd-4a2d-84c7-5f2c6270da49 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 12 20:49:56.917: INFO: Waiting for pod pod-secrets-a29334d1-fccd-4a2d-84c7-5f2c6270da49 to disappear
    Sep 12 20:49:56.920: INFO: Pod pod-secrets-a29334d1-fccd-4a2d-84c7-5f2c6270da49 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:49:56.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-5872" for this suite.
    STEP: Destroying namespace "secret-namespace-1449" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":738,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 20:49:56.978: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1b3e284-5410-479b-ba3d-8a3bd49c1374" in namespace "projected-5872" to be "Succeeded or Failed"
    Sep 12 20:49:56.984: INFO: Pod "downwardapi-volume-c1b3e284-5410-479b-ba3d-8a3bd49c1374": Phase="Pending", Reason="", readiness=false. Elapsed: 5.523333ms
    Sep 12 20:49:58.992: INFO: Pod "downwardapi-volume-c1b3e284-5410-479b-ba3d-8a3bd49c1374": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013797938s
    STEP: Saw pod success
    Sep 12 20:49:58.992: INFO: Pod "downwardapi-volume-c1b3e284-5410-479b-ba3d-8a3bd49c1374" satisfied condition "Succeeded or Failed"
    Sep 12 20:49:58.999: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod downwardapi-volume-c1b3e284-5410-479b-ba3d-8a3bd49c1374 container client-container: <nil>
    STEP: delete the pod
    Sep 12 20:49:59.027: INFO: Waiting for pod downwardapi-volume-c1b3e284-5410-479b-ba3d-8a3bd49c1374 to disappear
    Sep 12 20:49:59.030: INFO: Pod downwardapi-volume-c1b3e284-5410-479b-ba3d-8a3bd49c1374 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:49:59.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5872" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":743,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
    STEP: verifying the pod is in kubernetes
    STEP: updating the pod
    Sep 12 20:50:01.675: INFO: Successfully updated pod "pod-update-activedeadlineseconds-333b46e7-bd20-4000-b56b-ab9df6120b6a"
    Sep 12 20:50:01.675: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-333b46e7-bd20-4000-b56b-ab9df6120b6a" in namespace "pods-9418" to be "terminated due to deadline exceeded"
    Sep 12 20:50:01.678: INFO: Pod "pod-update-activedeadlineseconds-333b46e7-bd20-4000-b56b-ab9df6120b6a": Phase="Running", Reason="", readiness=true. Elapsed: 2.872048ms
    Sep 12 20:50:03.682: INFO: Pod "pod-update-activedeadlineseconds-333b46e7-bd20-4000-b56b-ab9df6120b6a": Phase="Running", Reason="", readiness=true. Elapsed: 2.007080119s
    Sep 12 20:50:05.687: INFO: Pod "pod-update-activedeadlineseconds-333b46e7-bd20-4000-b56b-ab9df6120b6a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.011734959s
    Sep 12 20:50:05.687: INFO: Pod "pod-update-activedeadlineseconds-333b46e7-bd20-4000-b56b-ab9df6120b6a" satisfied condition "terminated due to deadline exceeded"
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:50:05.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-9418" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":762,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
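The Pods test above updates spec.activeDeadlineSeconds on a running pod, one of the few mutable pod-spec fields, and then waits for the kubelet to fail the pod with reason DeadlineExceeded. Roughly, with client-go and a configured clientset:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // shortenDeadline fetches a running pod and lowers its
    // activeDeadlineSeconds in place; once the deadline passes, the kubelet
    // marks the pod Failed with reason DeadlineExceeded.
    func shortenDeadline(c *kubernetes.Clientset, ns, name string, seconds int64) error {
        pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        pod.Spec.ActiveDeadlineSeconds = &seconds
        _, err = c.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
        return err
    }
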
    {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":16,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:47:47.526: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 139 lines ...
    Sep 12 20:49:54.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4589 exec execpod-affinityrv4zv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 30046'
    Sep 12 20:49:56.605: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 30046\nConnection to 172.18.0.4 30046 port [tcp/*] succeeded!\n"
    Sep 12 20:49:56.605: INFO: stdout: ""
    Sep 12 20:49:56.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4589 exec execpod-affinityrv4zv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 30046'
    Sep 12 20:49:58.792: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 30046\nConnection to 172.18.0.4 30046 port [tcp/*] succeeded!\n"
    Sep 12 20:49:58.792: INFO: stdout: ""
    Sep 12 20:49:58.792: FAIL: Unexpected error:
        <*errors.errorString | 0xc0042b4250>: {
            s: "service is not reachable within 2m0s timeout on endpoint 172.18.0.4:30046 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint 172.18.0.4:30046 over TCP protocol
    occurred
    
... skipping 27 lines ...
    • Failure [140.005 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 12 20:49:58.792: Unexpected error:
          <*errors.errorString | 0xc0042b4250>: {
              s: "service is not reachable within 2m0s timeout on endpoint 172.18.0.4:30046 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint 172.18.0.4:30046 over TCP protocol
      occurred
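The failing check above polls a NodePort endpoint (172.18.0.4:30046) for up to 2m0s before giving up. A simplified sketch of such a reachability wait; note that the real helper also verifies the echoed hostname, not just the TCP connect, which is why the attempts above "succeed" at the nc level yet still time out:

    package sketch

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForNodePort retries a TCP dial against a node IP and NodePort until
    // a connection succeeds or the overall timeout elapses. Each attempt uses
    // a short per-dial timeout, mirroring nc's -w 2.
    func waitForNodePort(nodeIP string, port int, timeout time.Duration) error {
        addr := net.JoinHostPort(nodeIP, fmt.Sprint(port))
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("service is not reachable within %s timeout on endpoint %s over TCP protocol", timeout, addr)
    }
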
    
... skipping 6 lines ...
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via environment variable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-4761/configmap-test-a277cd86-4d23-4046-860d-4b054198b383
    STEP: Creating a pod to test consume configMaps
    Sep 12 20:50:05.798: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4653b6e-824b-4042-b78b-69d01b975bef" in namespace "configmap-4761" to be "Succeeded or Failed"
    Sep 12 20:50:05.802: INFO: Pod "pod-configmaps-f4653b6e-824b-4042-b78b-69d01b975bef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.263858ms
    Sep 12 20:50:07.806: INFO: Pod "pod-configmaps-f4653b6e-824b-4042-b78b-69d01b975bef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007821931s
    STEP: Saw pod success
    Sep 12 20:50:07.806: INFO: Pod "pod-configmaps-f4653b6e-824b-4042-b78b-69d01b975bef" satisfied condition "Succeeded or Failed"
    Sep 12 20:50:07.809: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-configmaps-f4653b6e-824b-4042-b78b-69d01b975bef container env-test: <nil>
    STEP: delete the pod
    Sep 12 20:50:07.827: INFO: Waiting for pod pod-configmaps-f4653b6e-824b-4042-b78b-69d01b975bef to disappear
    Sep 12 20:50:07.829: INFO: Pod pod-configmaps-f4653b6e-824b-4042-b78b-69d01b975bef no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:50:07.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4761" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":805,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:50:08.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9484" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":45,"skipped":853,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":16,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:50:07.535: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 48 lines ...
    STEP: Destroying namespace "services-9401" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":16,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:50:36.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-1325" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":4,"skipped":17,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Ingress API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:50:36.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingress-406" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":5,"skipped":48,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 35 lines ...
    STEP: Destroying namespace "services-4066" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":6,"skipped":67,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:50:50.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4429" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":76,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:50:50.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-5571" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":46,"skipped":904,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:50:55.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-3532" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":910,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:50:50.215: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 12 20:50:52.264: INFO: Deleting pod "var-expansion-f21735d6-ca0c-4d56-b366-008a5c0d5604" in namespace "var-expansion-4298"
    Sep 12 20:50:52.271: INFO: Wait up to 5m0s for pod "var-expansion-f21735d6-ca0c-4d56-b366-008a5c0d5604" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:00.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-4298" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":8,"skipped":145,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:12.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-45" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":48,"skipped":926,"failed":0}

    
    S
    ------------------------------
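The ResourceQuota test above creates a quota on Secret counts and watches status.used rise and fall as a secret is created and deleted. A sketch of the quota object (the name and limit are illustrative):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // secretsQuota caps the number of Secret objects in a namespace; the
    // apiserver updates the quota's status.used count as secrets come and go,
    // which is the lifecycle behaviour the test verifies.
    func secretsQuota(name string) *v1.ResourceQuota {
        return &v1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: v1.ResourceQuotaSpec{
                Hard: v1.ResourceList{
                    v1.ResourceSecrets: resource.MustParse("10"),
                },
            },
        }
    }
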
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:51:12.687: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-070d1fe2-ff88-4e72-9059-e5fdd54e9264
    STEP: Creating a pod to test consume configMaps
    Sep 12 20:51:12.735: INFO: Waiting up to 5m0s for pod "pod-configmaps-12e67660-a9a8-426a-a2b7-b14fb5d04e65" in namespace "configmap-569" to be "Succeeded or Failed"
    Sep 12 20:51:12.738: INFO: Pod "pod-configmaps-12e67660-a9a8-426a-a2b7-b14fb5d04e65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.911765ms
    Sep 12 20:51:14.745: INFO: Pod "pod-configmaps-12e67660-a9a8-426a-a2b7-b14fb5d04e65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009874416s
    STEP: Saw pod success
    Sep 12 20:51:14.745: INFO: Pod "pod-configmaps-12e67660-a9a8-426a-a2b7-b14fb5d04e65" satisfied condition "Succeeded or Failed"
    Sep 12 20:51:14.748: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod pod-configmaps-12e67660-a9a8-426a-a2b7-b14fb5d04e65 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 20:51:14.764: INFO: Waiting for pod pod-configmaps-12e67660-a9a8-426a-a2b7-b14fb5d04e65 to disappear
    Sep 12 20:51:14.767: INFO: Pod pod-configmaps-12e67660-a9a8-426a-a2b7-b14fb5d04e65 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:14.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-569" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":927,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Servers with support for Table transformation
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:14.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "tables-3864" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":50,"skipped":984,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:16.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-3909" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":990,"failed":0}

    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:51:16.030: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 12 20:51:16.081: INFO: Waiting up to 5m0s for pod "busybox-user-65534-43f55120-1c6a-45a6-8586-01293e7ab69c" in namespace "security-context-test-1865" to be "Succeeded or Failed"
    Sep 12 20:51:16.085: INFO: Pod "busybox-user-65534-43f55120-1c6a-45a6-8586-01293e7ab69c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296999ms
    Sep 12 20:51:18.091: INFO: Pod "busybox-user-65534-43f55120-1c6a-45a6-8586-01293e7ab69c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009953863s
    Sep 12 20:51:18.091: INFO: Pod "busybox-user-65534-43f55120-1c6a-45a6-8586-01293e7ab69c" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:18.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-1865" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":990,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:18.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-7941" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":9,"skipped":147,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:23.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-160" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":10,"skipped":150,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 48 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:26.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-247" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":11,"skipped":186,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:51:26.671: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pods
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:28.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-3944" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":186,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:29.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4923" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":13,"skipped":214,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:51:29.573: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-255f03c3-ad16-4955-a4f0-1ffe3a3db932
    STEP: Creating a pod to test consume configMaps
    Sep 12 20:51:29.617: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5930a3ca-30f5-4fd1-bf9b-763773822750" in namespace "projected-2466" to be "Succeeded or Failed"
    Sep 12 20:51:29.621: INFO: Pod "pod-projected-configmaps-5930a3ca-30f5-4fd1-bf9b-763773822750": Phase="Pending", Reason="", readiness=false. Elapsed: 3.421951ms
    Sep 12 20:51:31.626: INFO: Pod "pod-projected-configmaps-5930a3ca-30f5-4fd1-bf9b-763773822750": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008248632s
    STEP: Saw pod success
    Sep 12 20:51:31.626: INFO: Pod "pod-projected-configmaps-5930a3ca-30f5-4fd1-bf9b-763773822750" satisfied condition "Succeeded or Failed"
    Sep 12 20:51:31.629: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-projected-configmaps-5930a3ca-30f5-4fd1-bf9b-763773822750 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 20:51:31.648: INFO: Waiting for pod pod-projected-configmaps-5930a3ca-30f5-4fd1-bf9b-763773822750 to disappear
    Sep 12 20:51:31.651: INFO: Pod pod-projected-configmaps-5930a3ca-30f5-4fd1-bf9b-763773822750 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:31.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2466" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":226,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:51:31.688: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 12 20:51:31.732: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-ebe60c8c-3b10-459e-86c5-8ef08214299d" in namespace "security-context-test-4935" to be "Succeeded or Failed"
    Sep 12 20:51:31.736: INFO: Pod "busybox-privileged-false-ebe60c8c-3b10-459e-86c5-8ef08214299d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091869ms
    Sep 12 20:51:33.741: INFO: Pod "busybox-privileged-false-ebe60c8c-3b10-459e-86c5-8ef08214299d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009361192s
    Sep 12 20:51:33.741: INFO: Pod "busybox-privileged-false-ebe60c8c-3b10-459e-86c5-8ef08214299d" satisfied condition "Succeeded or Failed"
    Sep 12 20:51:33.747: INFO: Got logs for pod "busybox-privileged-false-ebe60c8c-3b10-459e-86c5-8ef08214299d": "ip: RTNETLINK answers: Operation not permitted\n"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:33.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-4935" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":242,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] version v1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 344 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:37.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "proxy-6523" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":53,"skipped":1004,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:51:37.572: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-b0880e04-39bd-424d-9920-3aa30e2fb0ec
    STEP: Creating a pod to test consume configMaps
    Sep 12 20:51:37.609: INFO: Waiting up to 5m0s for pod "pod-configmaps-5d69668e-1940-4adc-8ab1-b27ebd14d9d5" in namespace "configmap-443" to be "Succeeded or Failed"
    Sep 12 20:51:37.612: INFO: Pod "pod-configmaps-5d69668e-1940-4adc-8ab1-b27ebd14d9d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.368949ms
    Sep 12 20:51:39.616: INFO: Pod "pod-configmaps-5d69668e-1940-4adc-8ab1-b27ebd14d9d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006005936s
    STEP: Saw pod success
    Sep 12 20:51:39.616: INFO: Pod "pod-configmaps-5d69668e-1940-4adc-8ab1-b27ebd14d9d5" satisfied condition "Succeeded or Failed"
    Sep 12 20:51:39.618: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod pod-configmaps-5d69668e-1940-4adc-8ab1-b27ebd14d9d5 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 20:51:39.632: INFO: Waiting for pod pod-configmaps-5d69668e-1940-4adc-8ab1-b27ebd14d9d5 to disappear
    Sep 12 20:51:39.636: INFO: Pod pod-configmaps-5d69668e-1940-4adc-8ab1-b27ebd14d9d5 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:39.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-443" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":1007,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 20:51:39.718: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9bc85e9e-da04-45ad-bc16-9f59cb65faeb" in namespace "downward-api-2448" to be "Succeeded or Failed"
    Sep 12 20:51:39.722: INFO: Pod "downwardapi-volume-9bc85e9e-da04-45ad-bc16-9f59cb65faeb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.589481ms
    Sep 12 20:51:41.727: INFO: Pod "downwardapi-volume-9bc85e9e-da04-45ad-bc16-9f59cb65faeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009363995s
    STEP: Saw pod success
    Sep 12 20:51:41.727: INFO: Pod "downwardapi-volume-9bc85e9e-da04-45ad-bc16-9f59cb65faeb" satisfied condition "Succeeded or Failed"
    Sep 12 20:51:41.730: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod downwardapi-volume-9bc85e9e-da04-45ad-bc16-9f59cb65faeb container client-container: <nil>
    STEP: delete the pod
    Sep 12 20:51:41.748: INFO: Waiting for pod downwardapi-volume-9bc85e9e-da04-45ad-bc16-9f59cb65faeb to disappear
    Sep 12 20:51:41.750: INFO: Pod downwardapi-volume-9bc85e9e-da04-45ad-bc16-9f59cb65faeb no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:41.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-2448" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":1037,"failed":0}

    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:51:41.761: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename endpointslice
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:41.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-1608" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":56,"skipped":1037,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:51:46.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-4102" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":57,"skipped":1040,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-4hsq
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 12 20:51:46.103: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4hsq" in namespace "subpath-916" to be "Succeeded or Failed"
    Sep 12 20:51:46.106: INFO: Pod "pod-subpath-test-configmap-4hsq": Phase="Pending", Reason="", readiness=false. Elapsed: 3.238233ms
    Sep 12 20:51:48.110: INFO: Pod "pod-subpath-test-configmap-4hsq": Phase="Running", Reason="", readiness=true. Elapsed: 2.007345956s
    Sep 12 20:51:50.116: INFO: Pod "pod-subpath-test-configmap-4hsq": Phase="Running", Reason="", readiness=true. Elapsed: 4.012729073s
    Sep 12 20:51:52.120: INFO: Pod "pod-subpath-test-configmap-4hsq": Phase="Running", Reason="", readiness=true. Elapsed: 6.017392266s
    Sep 12 20:51:54.126: INFO: Pod "pod-subpath-test-configmap-4hsq": Phase="Running", Reason="", readiness=true. Elapsed: 8.022792781s
    Sep 12 20:51:56.130: INFO: Pod "pod-subpath-test-configmap-4hsq": Phase="Running", Reason="", readiness=true. Elapsed: 10.027491622s
    Sep 12 20:51:58.135: INFO: Pod "pod-subpath-test-configmap-4hsq": Phase="Running", Reason="", readiness=true. Elapsed: 12.032238729s
    Sep 12 20:52:00.141: INFO: Pod "pod-subpath-test-configmap-4hsq": Phase="Running", Reason="", readiness=true. Elapsed: 14.038039011s
    Sep 12 20:52:02.145: INFO: Pod "pod-subpath-test-configmap-4hsq": Phase="Running", Reason="", readiness=true. Elapsed: 16.042349185s
    Sep 12 20:52:04.150: INFO: Pod "pod-subpath-test-configmap-4hsq": Phase="Running", Reason="", readiness=true. Elapsed: 18.046655315s
    Sep 12 20:52:06.155: INFO: Pod "pod-subpath-test-configmap-4hsq": Phase="Running", Reason="", readiness=true. Elapsed: 20.051734598s
    Sep 12 20:52:08.159: INFO: Pod "pod-subpath-test-configmap-4hsq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.056536301s
    STEP: Saw pod success
    Sep 12 20:52:08.160: INFO: Pod "pod-subpath-test-configmap-4hsq" satisfied condition "Succeeded or Failed"
    Sep 12 20:52:08.163: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-subpath-test-configmap-4hsq container test-container-subpath-configmap-4hsq: <nil>
    STEP: delete the pod
    Sep 12 20:52:08.186: INFO: Waiting for pod pod-subpath-test-configmap-4hsq to disappear
    Sep 12 20:52:08.189: INFO: Pod pod-subpath-test-configmap-4hsq no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-4hsq
    Sep 12 20:52:08.190: INFO: Deleting pod "pod-subpath-test-configmap-4hsq" in namespace "subpath-916"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:52:08.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-916" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":58,"skipped":1044,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-2295-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":59,"skipped":1053,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods Extended
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:52:15.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-3020" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":60,"skipped":1074,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:52:15.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-6534" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":61,"skipped":1116,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:52:15.257: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep 12 20:52:15.304: INFO: Waiting up to 5m0s for pod "pod-16ccf2ee-497b-4bc4-b6a2-e0c121500b66" in namespace "emptydir-1337" to be "Succeeded or Failed"
    Sep 12 20:52:15.310: INFO: Pod "pod-16ccf2ee-497b-4bc4-b6a2-e0c121500b66": Phase="Pending", Reason="", readiness=false. Elapsed: 5.469087ms
    Sep 12 20:52:17.316: INFO: Pod "pod-16ccf2ee-497b-4bc4-b6a2-e0c121500b66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012132605s
    STEP: Saw pod success
    Sep 12 20:52:17.316: INFO: Pod "pod-16ccf2ee-497b-4bc4-b6a2-e0c121500b66" satisfied condition "Succeeded or Failed"
    Sep 12 20:52:17.319: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-16ccf2ee-497b-4bc4-b6a2-e0c121500b66 container test-container: <nil>
    STEP: delete the pod
    Sep 12 20:52:17.333: INFO: Waiting for pod pod-16ccf2ee-497b-4bc4-b6a2-e0c121500b66 to disappear
    Sep 12 20:52:17.336: INFO: Pod pod-16ccf2ee-497b-4bc4-b6a2-e0c121500b66 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:52:17.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1337" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":62,"skipped":1129,"failed":0}

    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:52:17.346: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide host IP as an env var [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 12 20:52:17.390: INFO: Waiting up to 5m0s for pod "downward-api-749fd388-8d41-4d88-9318-7ce446007fa9" in namespace "downward-api-2204" to be "Succeeded or Failed"
    Sep 12 20:52:17.393: INFO: Pod "downward-api-749fd388-8d41-4d88-9318-7ce446007fa9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.377832ms
    Sep 12 20:52:19.398: INFO: Pod "downward-api-749fd388-8d41-4d88-9318-7ce446007fa9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008233823s
    STEP: Saw pod success
    Sep 12 20:52:19.398: INFO: Pod "downward-api-749fd388-8d41-4d88-9318-7ce446007fa9" satisfied condition "Succeeded or Failed"
    Sep 12 20:52:19.401: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod downward-api-749fd388-8d41-4d88-9318-7ce446007fa9 container dapi-container: <nil>
    STEP: delete the pod
    Sep 12 20:52:19.417: INFO: Waiting for pod downward-api-749fd388-8d41-4d88-9318-7ce446007fa9 to disappear
    Sep 12 20:52:19.420: INFO: Pod downward-api-749fd388-8d41-4d88-9318-7ce446007fa9 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:52:19.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-2204" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":63,"skipped":1129,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:52:30.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-6116" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":64,"skipped":1140,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:52:32.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-7938" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":65,"skipped":1156,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:52:32.643: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep 12 20:52:32.682: INFO: Waiting up to 5m0s for pod "pod-5a078f36-8b46-4406-8ce8-7b5ceb52ec24" in namespace "emptydir-3343" to be "Succeeded or Failed"
    Sep 12 20:52:32.686: INFO: Pod "pod-5a078f36-8b46-4406-8ce8-7b5ceb52ec24": Phase="Pending", Reason="", readiness=false. Elapsed: 3.299737ms
    Sep 12 20:52:34.690: INFO: Pod "pod-5a078f36-8b46-4406-8ce8-7b5ceb52ec24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008113842s
    STEP: Saw pod success
    Sep 12 20:52:34.690: INFO: Pod "pod-5a078f36-8b46-4406-8ce8-7b5ceb52ec24" satisfied condition "Succeeded or Failed"
    Sep 12 20:52:34.693: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-5a078f36-8b46-4406-8ce8-7b5ceb52ec24 container test-container: <nil>
    STEP: delete the pod
    Sep 12 20:52:34.707: INFO: Waiting for pod pod-5a078f36-8b46-4406-8ce8-7b5ceb52ec24 to disappear
    Sep 12 20:52:34.710: INFO: Pod pod-5a078f36-8b46-4406-8ce8-7b5ceb52ec24 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:52:34.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-3343" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":66,"skipped":1183,"failed":0}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:52:34.742: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep 12 20:52:34.778: INFO: Waiting up to 5m0s for pod "pod-a9bf6388-d135-4b6e-9b8f-ff0a71fc786f" in namespace "emptydir-2295" to be "Succeeded or Failed"
    Sep 12 20:52:34.781: INFO: Pod "pod-a9bf6388-d135-4b6e-9b8f-ff0a71fc786f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.786195ms
    Sep 12 20:52:36.785: INFO: Pod "pod-a9bf6388-d135-4b6e-9b8f-ff0a71fc786f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007124875s
    STEP: Saw pod success
    Sep 12 20:52:36.785: INFO: Pod "pod-a9bf6388-d135-4b6e-9b8f-ff0a71fc786f" satisfied condition "Succeeded or Failed"
    Sep 12 20:52:36.788: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-a9bf6388-d135-4b6e-9b8f-ff0a71fc786f container test-container: <nil>
    STEP: delete the pod
    Sep 12 20:52:36.804: INFO: Waiting for pod pod-a9bf6388-d135-4b6e-9b8f-ff0a71fc786f to disappear
    Sep 12 20:52:36.806: INFO: Pod pod-a9bf6388-d135-4b6e-9b8f-ff0a71fc786f no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:52:36.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-2295" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":1200,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:52:41.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-2499" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":68,"skipped":1216,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:52:41.945: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename job
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a job
    STEP: Ensuring job reaches completions
    [AfterEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:52:47.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-5345" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":69,"skipped":1224,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:01.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-9200" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":16,"skipped":244,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] server version
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:01.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "server-version-4035" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":17,"skipped":273,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 50 lines ...
    STEP: Destroying namespace "crd-webhook-3785" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":18,"skipped":277,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":438,"failed":0}

    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:53:04.830: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:09.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5697" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":438,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:10.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-4798" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":284,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:16.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-8062" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":20,"skipped":333,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] Aggregator
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:19.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "aggregator-841" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":24,"skipped":448,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:20.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-166" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":21,"skipped":334,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:35.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4501" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":25,"skipped":461,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:41.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-3926" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":22,"skipped":364,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:41.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-7456" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":26,"skipped":494,"failed":0}

    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:53:41.650: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename events
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:41.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-1513" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":27,"skipped":494,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 20:53:41.640: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab7aff79-a03f-4a64-bca6-fd8c7cbfe164" in namespace "projected-2232" to be "Succeeded or Failed"
    Sep 12 20:53:41.643: INFO: Pod "downwardapi-volume-ab7aff79-a03f-4a64-bca6-fd8c7cbfe164": Phase="Pending", Reason="", readiness=false. Elapsed: 2.993266ms
    Sep 12 20:53:43.647: INFO: Pod "downwardapi-volume-ab7aff79-a03f-4a64-bca6-fd8c7cbfe164": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007473134s
    STEP: Saw pod success
    Sep 12 20:53:43.647: INFO: Pod "downwardapi-volume-ab7aff79-a03f-4a64-bca6-fd8c7cbfe164" satisfied condition "Succeeded or Failed"
    Sep 12 20:53:43.650: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod downwardapi-volume-ab7aff79-a03f-4a64-bca6-fd8c7cbfe164 container client-container: <nil>
    STEP: delete the pod
    Sep 12 20:53:43.665: INFO: Waiting for pod downwardapi-volume-ab7aff79-a03f-4a64-bca6-fd8c7cbfe164 to disappear
    Sep 12 20:53:43.667: INFO: Pod downwardapi-volume-ab7aff79-a03f-4a64-bca6-fd8c7cbfe164 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:43.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2232" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":365,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:44.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-7854" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":28,"skipped":538,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:53:43.703: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-3ec8c961-3a4f-4430-a6d2-73d887f514c3
    STEP: Creating a pod to test consume configMaps
    Sep 12 20:53:43.742: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a29ceaa7-3b81-4bdf-9fda-21087353c54d" in namespace "projected-8147" to be "Succeeded or Failed"
    Sep 12 20:53:43.745: INFO: Pod "pod-projected-configmaps-a29ceaa7-3b81-4bdf-9fda-21087353c54d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.833773ms
    Sep 12 20:53:45.749: INFO: Pod "pod-projected-configmaps-a29ceaa7-3b81-4bdf-9fda-21087353c54d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006843468s
    STEP: Saw pod success
    Sep 12 20:53:45.749: INFO: Pod "pod-projected-configmaps-a29ceaa7-3b81-4bdf-9fda-21087353c54d" satisfied condition "Succeeded or Failed"
    Sep 12 20:53:45.752: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod pod-projected-configmaps-a29ceaa7-3b81-4bdf-9fda-21087353c54d container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 20:53:45.767: INFO: Waiting for pod pod-projected-configmaps-a29ceaa7-3b81-4bdf-9fda-21087353c54d to disappear
    Sep 12 20:53:45.770: INFO: Pod pod-projected-configmaps-a29ceaa7-3b81-4bdf-9fda-21087353c54d no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:45.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8147" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":385,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:53:45.791: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-d980eb65-ed93-4639-b57e-a2a855c2365f
    STEP: Creating a pod to test consume configMaps
    Sep 12 20:53:45.833: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6409a575-6f89-440b-9006-e2fffa590249" in namespace "projected-5651" to be "Succeeded or Failed"
    Sep 12 20:53:45.837: INFO: Pod "pod-projected-configmaps-6409a575-6f89-440b-9006-e2fffa590249": Phase="Pending", Reason="", readiness=false. Elapsed: 3.095045ms
    Sep 12 20:53:47.841: INFO: Pod "pod-projected-configmaps-6409a575-6f89-440b-9006-e2fffa590249": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007458055s
    STEP: Saw pod success
    Sep 12 20:53:47.841: INFO: Pod "pod-projected-configmaps-6409a575-6f89-440b-9006-e2fffa590249" satisfied condition "Succeeded or Failed"
    Sep 12 20:53:47.844: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod pod-projected-configmaps-6409a575-6f89-440b-9006-e2fffa590249 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 20:53:47.862: INFO: Waiting for pod pod-projected-configmaps-6409a575-6f89-440b-9006-e2fffa590249 to disappear
    Sep 12 20:53:47.866: INFO: Pod pod-projected-configmaps-6409a575-6f89-440b-9006-e2fffa590249 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:47.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5651" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":389,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:49.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-9978" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":397,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:53:49.962: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name projected-secret-test-254831c5-e3ea-467e-8189-2ee5d3fb7312
    STEP: Creating a pod to test consume secrets
    Sep 12 20:53:50.005: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3bde54eb-bcf7-4a28-8993-4538c49659a9" in namespace "projected-9835" to be "Succeeded or Failed"
    Sep 12 20:53:50.008: INFO: Pod "pod-projected-secrets-3bde54eb-bcf7-4a28-8993-4538c49659a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.63174ms
    Sep 12 20:53:52.013: INFO: Pod "pod-projected-secrets-3bde54eb-bcf7-4a28-8993-4538c49659a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007680265s
    STEP: Saw pod success
    Sep 12 20:53:52.013: INFO: Pod "pod-projected-secrets-3bde54eb-bcf7-4a28-8993-4538c49659a9" satisfied condition "Succeeded or Failed"
    Sep 12 20:53:52.015: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-projected-secrets-3bde54eb-bcf7-4a28-8993-4538c49659a9 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 12 20:53:52.035: INFO: Waiting for pod pod-projected-secrets-3bde54eb-bcf7-4a28-8993-4538c49659a9 to disappear
    Sep 12 20:53:52.039: INFO: Pod pod-projected-secrets-3bde54eb-bcf7-4a28-8993-4538c49659a9 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:52.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9835" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":401,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:54.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-1984" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":29,"skipped":548,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:53:59.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-745" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":28,"skipped":445,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:53:59.231: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod UID as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 12 20:53:59.265: INFO: Waiting up to 5m0s for pod "downward-api-bb8fc500-6e43-49e6-8344-1293481d1a26" in namespace "downward-api-6827" to be "Succeeded or Failed"
    Sep 12 20:53:59.269: INFO: Pod "downward-api-bb8fc500-6e43-49e6-8344-1293481d1a26": Phase="Pending", Reason="", readiness=false. Elapsed: 3.427987ms
    Sep 12 20:54:01.275: INFO: Pod "downward-api-bb8fc500-6e43-49e6-8344-1293481d1a26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009422541s
    STEP: Saw pod success
    Sep 12 20:54:01.275: INFO: Pod "downward-api-bb8fc500-6e43-49e6-8344-1293481d1a26" satisfied condition "Succeeded or Failed"
    Sep 12 20:54:01.278: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod downward-api-bb8fc500-6e43-49e6-8344-1293481d1a26 container dapi-container: <nil>
    STEP: delete the pod
    Sep 12 20:54:01.298: INFO: Waiting for pod downward-api-bb8fc500-6e43-49e6-8344-1293481d1a26 to disappear
    Sep 12 20:54:01.301: INFO: Pod downward-api-bb8fc500-6e43-49e6-8344-1293481d1a26 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:54:01.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6827" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":483,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 88 lines ...
    STEP: Destroying namespace "services-4781" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":556,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with projected pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-projected-g6nv
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 12 20:54:40.770: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-g6nv" in namespace "subpath-8493" to be "Succeeded or Failed"
    Sep 12 20:54:40.780: INFO: Pod "pod-subpath-test-projected-g6nv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02034ms
    Sep 12 20:54:42.788: INFO: Pod "pod-subpath-test-projected-g6nv": Phase="Running", Reason="", readiness=true. Elapsed: 2.017570889s
    Sep 12 20:54:44.794: INFO: Pod "pod-subpath-test-projected-g6nv": Phase="Running", Reason="", readiness=true. Elapsed: 4.024370659s
    Sep 12 20:54:46.801: INFO: Pod "pod-subpath-test-projected-g6nv": Phase="Running", Reason="", readiness=true. Elapsed: 6.030725814s
    Sep 12 20:54:48.808: INFO: Pod "pod-subpath-test-projected-g6nv": Phase="Running", Reason="", readiness=true. Elapsed: 8.038189488s
    Sep 12 20:54:50.814: INFO: Pod "pod-subpath-test-projected-g6nv": Phase="Running", Reason="", readiness=true. Elapsed: 10.044186203s
    Sep 12 20:54:52.821: INFO: Pod "pod-subpath-test-projected-g6nv": Phase="Running", Reason="", readiness=true. Elapsed: 12.050861781s
    Sep 12 20:54:54.828: INFO: Pod "pod-subpath-test-projected-g6nv": Phase="Running", Reason="", readiness=true. Elapsed: 14.05781482s
    Sep 12 20:54:56.836: INFO: Pod "pod-subpath-test-projected-g6nv": Phase="Running", Reason="", readiness=true. Elapsed: 16.065586992s
    Sep 12 20:54:58.845: INFO: Pod "pod-subpath-test-projected-g6nv": Phase="Running", Reason="", readiness=true. Elapsed: 18.074764356s
    Sep 12 20:55:00.854: INFO: Pod "pod-subpath-test-projected-g6nv": Phase="Running", Reason="", readiness=true. Elapsed: 20.083904191s
    Sep 12 20:55:02.863: INFO: Pod "pod-subpath-test-projected-g6nv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.093291172s
    STEP: Saw pod success
    Sep 12 20:55:02.864: INFO: Pod "pod-subpath-test-projected-g6nv" satisfied condition "Succeeded or Failed"
    Sep 12 20:55:02.870: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-subpath-test-projected-g6nv container test-container-subpath-projected-g6nv: <nil>
    STEP: delete the pod
    Sep 12 20:55:02.903: INFO: Waiting for pod pod-subpath-test-projected-g6nv to disappear
    Sep 12 20:55:02.908: INFO: Pod pod-subpath-test-projected-g6nv no longer exists
    STEP: Deleting pod pod-subpath-test-projected-g6nv
    Sep 12 20:55:02.908: INFO: Deleting pod "pod-subpath-test-projected-g6nv" in namespace "subpath-8493"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 20:55:02.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-8493" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":560,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 160 lines ...
    Sep 12 20:52:50.625: INFO: stdout: "deployment.apps/agnhost-replica created\n"
    STEP: validating guestbook app
    Sep 12 20:52:50.625: INFO: Waiting for all frontend pods to be Running.
    Sep 12 20:52:55.676: INFO: Waiting for frontend to serve content.
    Sep 12 20:52:55.686: INFO: Trying to add a new entry to the guestbook.
    Sep 12 20:52:55.694: INFO: Verifying that added entry can be retrieved.
    Sep 12 20:52:55.704: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
    Sep 12 20:56:34.834: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: read tcp 172.18.0.9:59422->192.168.2.22:80: read: connection reset by peer", Reason:"ServiceUnavailable"}
    Sep 12 20:56:39.835: FAIL: Entry to guestbook wasn't correctly added in 180 seconds.
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/kubectl.glob..func1.7.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:372 +0x159
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a70480)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 42 lines ...
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 12 20:56:39.835: Entry to guestbook wasn't correctly added in 180 seconds.
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:372
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":30,"skipped":502,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:54:17.474: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 138 lines ...
    Sep 12 20:56:26.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-946 exec execpod-affinitymhqrb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep 12 20:56:28.837: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n"
    Sep 12 20:56:28.837: INFO: stdout: ""
    Sep 12 20:56:28.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-946 exec execpod-affinitymhqrb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep 12 20:56:31.199: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n"
    Sep 12 20:56:31.199: INFO: stdout: ""
    Sep 12 20:56:31.200: FAIL: Unexpected error:
        <*errors.errorString | 0xc0023445f0>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol
    occurred
    
... skipping 25 lines ...
    • Failure [150.206 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 12 20:56:31.200: Unexpected error:
          <*errors.errorString | 0xc0023445f0>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol
      occurred
    
... skipping 4 lines ...
    STEP: Creating a kubernetes client
    Sep 12 20:55:03.031: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod with failed condition
    STEP: updating the pod
    Sep 12 20:57:03.639: INFO: Successfully updated pod "var-expansion-191c56f5-5e9e-42d4-abf0-c27d99bde105"
    STEP: waiting for pod running
    STEP: deleting the pod gracefully
    Sep 12 20:57:05.652: INFO: Deleting pod "var-expansion-191c56f5-5e9e-42d4-abf0-c27d99bde105" in namespace "var-expansion-5044"
    Sep 12 20:57:05.661: INFO: Wait up to 5m0s for pod "var-expansion-191c56f5-5e9e-42d4-abf0-c27d99bde105" to be fully deleted
... skipping 6 lines ...
    • [SLOW TEST:158.660 seconds]
    [sig-node] Variable Expansion
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":32,"skipped":595,"failed":0}

    
    SSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":502,"failed":3,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:56:47.688: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 139 lines ...
    Sep 12 20:58:58.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9320 exec execpod-affinity6snsx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep 12 20:59:01.090: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n"
    Sep 12 20:59:01.091: INFO: stdout: ""
    Sep 12 20:59:01.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9320 exec execpod-affinity6snsx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep 12 20:59:03.416: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n"
    Sep 12 20:59:03.416: INFO: stdout: ""
    Sep 12 20:59:03.417: FAIL: Unexpected error:
        <*errors.errorString | 0xc002344890>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol
    occurred
    
... skipping 25 lines ...
    • Failure [151.533 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 12 20:59:03.417: Unexpected error:
          <*errors.errorString | 0xc002344890>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497
    ------------------------------
    {"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":502,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:59:19.228: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 65 lines ...
    STEP: Destroying namespace "services-5523" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":502,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SS
    ------------------------------
    {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":69,"skipped":1225,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 20:56:41.386: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 157 lines ...
    Sep 12 20:56:45.579: INFO: stdout: "deployment.apps/agnhost-replica created\n"
    STEP: validating guestbook app
    Sep 12 20:56:45.579: INFO: Waiting for all frontend pods to be Running.
    Sep 12 20:56:50.630: INFO: Waiting for frontend to serve content.
    Sep 12 20:56:50.650: INFO: Trying to add a new entry to the guestbook.
    Sep 12 20:56:50.667: INFO: Verifying that added entry can be retrieved.
    Sep 12 21:00:24.210: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: read tcp 172.18.0.9:37252->192.168.2.26:80: read: connection reset by peer", Reason:"ServiceUnavailable"}
    Sep 12 21:00:29.211: FAIL: Entry to guestbook wasn't correctly added in 180 seconds.
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/kubectl.glob..func1.7.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:372 +0x159
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a70480)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 84 lines ...
    STEP: Destroying namespace "services-8029" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":32,"skipped":504,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:00:35.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-4033" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":33,"skipped":528,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:00:46.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-1332" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":34,"skipped":540,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:01:46.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-9614" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":562,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:01:54.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-1217" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":563,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:01:54.755: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a volume subpath [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in volume subpath
    Sep 12 21:01:54.796: INFO: Waiting up to 5m0s for pod "var-expansion-cfb79ab1-79d0-4c96-92df-297d9f255c6e" in namespace "var-expansion-6659" to be "Succeeded or Failed"
    Sep 12 21:01:54.799: INFO: Pod "var-expansion-cfb79ab1-79d0-4c96-92df-297d9f255c6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.767025ms
    Sep 12 21:01:56.804: INFO: Pod "var-expansion-cfb79ab1-79d0-4c96-92df-297d9f255c6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007365027s
    STEP: Saw pod success
    Sep 12 21:01:56.804: INFO: Pod "var-expansion-cfb79ab1-79d0-4c96-92df-297d9f255c6e" satisfied condition "Succeeded or Failed"
    Sep 12 21:01:56.807: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod var-expansion-cfb79ab1-79d0-4c96-92df-297d9f255c6e container dapi-container: <nil>
    STEP: delete the pod
    Sep 12 21:01:56.839: INFO: Waiting for pod var-expansion-cfb79ab1-79d0-4c96-92df-297d9f255c6e to disappear
    Sep 12 21:01:56.846: INFO: Pod var-expansion-cfb79ab1-79d0-4c96-92df-297d9f255c6e no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:01:56.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-6659" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":37,"skipped":604,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-qhx4
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 12 21:01:56.916: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qhx4" in namespace "subpath-1410" to be "Succeeded or Failed"
    Sep 12 21:01:56.919: INFO: Pod "pod-subpath-test-configmap-qhx4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.053019ms
    Sep 12 21:01:58.925: INFO: Pod "pod-subpath-test-configmap-qhx4": Phase="Running", Reason="", readiness=true. Elapsed: 2.008724339s
    Sep 12 21:02:00.929: INFO: Pod "pod-subpath-test-configmap-qhx4": Phase="Running", Reason="", readiness=true. Elapsed: 4.013240282s
    Sep 12 21:02:02.934: INFO: Pod "pod-subpath-test-configmap-qhx4": Phase="Running", Reason="", readiness=true. Elapsed: 6.017658947s
    Sep 12 21:02:04.938: INFO: Pod "pod-subpath-test-configmap-qhx4": Phase="Running", Reason="", readiness=true. Elapsed: 8.022082708s
    Sep 12 21:02:06.942: INFO: Pod "pod-subpath-test-configmap-qhx4": Phase="Running", Reason="", readiness=true. Elapsed: 10.026230042s
    Sep 12 21:02:08.948: INFO: Pod "pod-subpath-test-configmap-qhx4": Phase="Running", Reason="", readiness=true. Elapsed: 12.032135544s
    Sep 12 21:02:10.953: INFO: Pod "pod-subpath-test-configmap-qhx4": Phase="Running", Reason="", readiness=true. Elapsed: 14.037519274s
    Sep 12 21:02:12.958: INFO: Pod "pod-subpath-test-configmap-qhx4": Phase="Running", Reason="", readiness=true. Elapsed: 16.042122026s
    Sep 12 21:02:14.964: INFO: Pod "pod-subpath-test-configmap-qhx4": Phase="Running", Reason="", readiness=true. Elapsed: 18.047771828s
    Sep 12 21:02:16.969: INFO: Pod "pod-subpath-test-configmap-qhx4": Phase="Running", Reason="", readiness=true. Elapsed: 20.053210781s
    Sep 12 21:02:18.974: INFO: Pod "pod-subpath-test-configmap-qhx4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.058550835s
    STEP: Saw pod success
    Sep 12 21:02:18.975: INFO: Pod "pod-subpath-test-configmap-qhx4" satisfied condition "Succeeded or Failed"
    Sep 12 21:02:18.978: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod pod-subpath-test-configmap-qhx4 container test-container-subpath-configmap-qhx4: <nil>
    STEP: delete the pod
    Sep 12 21:02:18.997: INFO: Waiting for pod pod-subpath-test-configmap-qhx4 to disappear
    Sep 12 21:02:19.000: INFO: Pod pod-subpath-test-configmap-qhx4 no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-qhx4
    Sep 12 21:02:19.001: INFO: Deleting pod "pod-subpath-test-configmap-qhx4" in namespace "subpath-1410"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:02:19.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-1410" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":38,"skipped":614,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:02:19.041: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep 12 21:02:19.081: INFO: Waiting up to 5m0s for pod "pod-557ac6d2-8beb-4a77-b904-c1b0341c38bd" in namespace "emptydir-8225" to be "Succeeded or Failed"
    Sep 12 21:02:19.084: INFO: Pod "pod-557ac6d2-8beb-4a77-b904-c1b0341c38bd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.819763ms
    Sep 12 21:02:21.089: INFO: Pod "pod-557ac6d2-8beb-4a77-b904-c1b0341c38bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008360875s
    STEP: Saw pod success
    Sep 12 21:02:21.089: INFO: Pod "pod-557ac6d2-8beb-4a77-b904-c1b0341c38bd" satisfied condition "Succeeded or Failed"
    Sep 12 21:02:21.092: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod pod-557ac6d2-8beb-4a77-b904-c1b0341c38bd container test-container: <nil>
    STEP: delete the pod
    Sep 12 21:02:21.109: INFO: Waiting for pod pod-557ac6d2-8beb-4a77-b904-c1b0341c38bd to disappear
    Sep 12 21:02:21.112: INFO: Pod pod-557ac6d2-8beb-4a77-b904-c1b0341c38bd no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:02:21.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8225" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":631,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:02:21.132: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-2879d977-9b66-40b1-b744-5d284d68e625
    STEP: Creating a pod to test consume secrets
    Sep 12 21:02:21.179: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-91af5d4f-0e1e-4965-97ac-3ada57a973f3" in namespace "projected-1667" to be "Succeeded or Failed"
    Sep 12 21:02:21.182: INFO: Pod "pod-projected-secrets-91af5d4f-0e1e-4965-97ac-3ada57a973f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.940793ms
    Sep 12 21:02:23.186: INFO: Pod "pod-projected-secrets-91af5d4f-0e1e-4965-97ac-3ada57a973f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007078743s
    STEP: Saw pod success
    Sep 12 21:02:23.186: INFO: Pod "pod-projected-secrets-91af5d4f-0e1e-4965-97ac-3ada57a973f3" satisfied condition "Succeeded or Failed"
    Sep 12 21:02:23.189: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod pod-projected-secrets-91af5d4f-0e1e-4965-97ac-3ada57a973f3 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 12 21:02:23.211: INFO: Waiting for pod pod-projected-secrets-91af5d4f-0e1e-4965-97ac-3ada57a973f3 to disappear
    Sep 12 21:02:23.213: INFO: Pod pod-projected-secrets-91af5d4f-0e1e-4965-97ac-3ada57a973f3 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:02:23.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1667" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":638,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:02:29.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-5179" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":41,"skipped":646,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:02:29.351: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's command [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's command
    Sep 12 21:02:29.390: INFO: Waiting up to 5m0s for pod "var-expansion-91b0a9eb-7222-4b18-afa1-5390c7f05496" in namespace "var-expansion-8156" to be "Succeeded or Failed"
    Sep 12 21:02:29.396: INFO: Pod "var-expansion-91b0a9eb-7222-4b18-afa1-5390c7f05496": Phase="Pending", Reason="", readiness=false. Elapsed: 5.894188ms
    Sep 12 21:02:31.402: INFO: Pod "var-expansion-91b0a9eb-7222-4b18-afa1-5390c7f05496": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011663585s
    STEP: Saw pod success
    Sep 12 21:02:31.402: INFO: Pod "var-expansion-91b0a9eb-7222-4b18-afa1-5390c7f05496" satisfied condition "Succeeded or Failed"
    Sep 12 21:02:31.405: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod var-expansion-91b0a9eb-7222-4b18-afa1-5390c7f05496 container dapi-container: <nil>
    STEP: delete the pod
    Sep 12 21:02:31.422: INFO: Waiting for pod var-expansion-91b0a9eb-7222-4b18-afa1-5390c7f05496 to disappear
    Sep 12 21:02:31.425: INFO: Pod var-expansion-91b0a9eb-7222-4b18-afa1-5390c7f05496 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:02:31.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-8156" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":679,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:02:31.442: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's args [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's args
    Sep 12 21:02:31.487: INFO: Waiting up to 5m0s for pod "var-expansion-3b58af87-0de8-42a1-85cf-c93c23cd20ad" in namespace "var-expansion-2982" to be "Succeeded or Failed"
    Sep 12 21:02:31.491: INFO: Pod "var-expansion-3b58af87-0de8-42a1-85cf-c93c23cd20ad": Phase="Pending", Reason="", readiness=false. Elapsed: 3.26233ms
    Sep 12 21:02:33.495: INFO: Pod "var-expansion-3b58af87-0de8-42a1-85cf-c93c23cd20ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007295999s
    STEP: Saw pod success
    Sep 12 21:02:33.495: INFO: Pod "var-expansion-3b58af87-0de8-42a1-85cf-c93c23cd20ad" satisfied condition "Succeeded or Failed"
    Sep 12 21:02:33.497: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-mgm4ov pod var-expansion-3b58af87-0de8-42a1-85cf-c93c23cd20ad container dapi-container: <nil>
    STEP: delete the pod
    Sep 12 21:02:33.527: INFO: Waiting for pod var-expansion-3b58af87-0de8-42a1-85cf-c93c23cd20ad to disappear
    Sep 12 21:02:33.529: INFO: Pod var-expansion-3b58af87-0de8-42a1-85cf-c93c23cd20ad no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:02:33.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-2982" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":683,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:300.083 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule jobs when suspended [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":33,"skipped":600,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 49 lines ...
    STEP: Creating a validating webhook configuration
    Sep 12 21:02:47.004: INFO: Waiting for webhook configuration to be ready...
    Sep 12 21:02:57.117: INFO: Waiting for webhook configuration to be ready...
    Sep 12 21:03:07.217: INFO: Waiting for webhook configuration to be ready...
    Sep 12 21:03:17.316: INFO: Waiting for webhook configuration to be ready...
    Sep 12 21:03:27.326: INFO: Waiting for webhook configuration to be ready...
    Sep 12 21:03:27.327: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000242290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      patching/updating a validating webhook should work [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 12 21:03:27.327: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000242290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:432
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":43,"skipped":705,"failed":5,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:03:27.391: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-7686-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":44,"skipped":705,"failed":5,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 45 lines ...
    STEP: Destroying namespace "services-7734" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":45,"skipped":720,"failed":5,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":69,"skipped":1225,"failed":2,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:00:30.248: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 156 lines ...
    Sep 12 21:00:32.847: INFO: stderr: ""
    Sep 12 21:00:32.847: INFO: stdout: "deployment.apps/agnhost-replica created\n"
    STEP: validating guestbook app
    Sep 12 21:00:32.847: INFO: Waiting for all frontend pods to be Running.
    Sep 12 21:00:37.900: INFO: Waiting for frontend to serve content.
    Sep 12 21:00:37.910: INFO: Trying to add a new entry to the guestbook.
    Sep 12 21:04:11.537: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: <binary-encoded v1.Status> Failure: error trying to reach service: read tcp 172.18.0.9:55314->192.168.2.30:80: read: connection reset by peer (reason: ServiceUnavailable)
    Sep 12 21:04:16.539: FAIL: Cannot added new entry in 180 seconds.

    Full Stack Trace
    k8s.io/kubernetes/test/e2e/kubectl.glob..func1.7.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:372 +0x159
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a70480)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 42 lines ...
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 12 21:04:16.539: Cannot added new entry in 180 seconds.
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:372
    ------------------------------
    {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":69,"skipped":1225,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:04:17.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-5705" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":70,"skipped":1232,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:04:39.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-675" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":71,"skipped":1345,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:04:40.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-3045" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":72,"skipped":1366,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:04:40.592: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep 12 21:04:40.645: INFO: Waiting up to 5m0s for pod "security-context-2bea4b0e-77b1-4205-9c3c-92a67e882662" in namespace "security-context-6846" to be "Succeeded or Failed"
    Sep 12 21:04:40.649: INFO: Pod "security-context-2bea4b0e-77b1-4205-9c3c-92a67e882662": Phase="Pending", Reason="", readiness=false. Elapsed: 3.296114ms
    Sep 12 21:04:42.653: INFO: Pod "security-context-2bea4b0e-77b1-4205-9c3c-92a67e882662": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007998171s
    STEP: Saw pod success
    Sep 12 21:04:42.654: INFO: Pod "security-context-2bea4b0e-77b1-4205-9c3c-92a67e882662" satisfied condition "Succeeded or Failed"
    Sep 12 21:04:42.657: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-mgm4ov pod security-context-2bea4b0e-77b1-4205-9c3c-92a67e882662 container test-container: <nil>
    STEP: delete the pod
    Sep 12 21:04:42.677: INFO: Waiting for pod security-context-2bea4b0e-77b1-4205-9c3c-92a67e882662 to disappear
    Sep 12 21:04:42.679: INFO: Pod security-context-2bea4b0e-77b1-4205-9c3c-92a67e882662 no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:04:42.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-6846" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":73,"skipped":1430,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":34,"skipped":627,"failed":0}

    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:02:45.390: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-probe
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
    • [SLOW TEST:152.470 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should have monotonically increasing restart count [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":627,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:05:22.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-8354" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":672,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:05:22.606: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on tmpfs
    Sep 12 21:05:22.645: INFO: Waiting up to 5m0s for pod "pod-da446a8d-3e58-4590-bb1d-9020c028c361" in namespace "emptydir-7205" to be "Succeeded or Failed"
    Sep 12 21:05:22.647: INFO: Pod "pod-da446a8d-3e58-4590-bb1d-9020c028c361": Phase="Pending", Reason="", readiness=false. Elapsed: 2.509511ms
    Sep 12 21:05:24.652: INFO: Pod "pod-da446a8d-3e58-4590-bb1d-9020c028c361": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007321848s
    STEP: Saw pod success
    Sep 12 21:05:24.652: INFO: Pod "pod-da446a8d-3e58-4590-bb1d-9020c028c361" satisfied condition "Succeeded or Failed"
    Sep 12 21:05:24.655: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-da446a8d-3e58-4590-bb1d-9020c028c361 container test-container: <nil>
    STEP: delete the pod
    Sep 12 21:05:24.677: INFO: Waiting for pod pod-da446a8d-3e58-4590-bb1d-9020c028c361 to disappear
    Sep 12 21:05:24.681: INFO: Pod pod-da446a8d-3e58-4590-bb1d-9020c028c361 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:05:24.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7205" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":681,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 36 lines ...
    Sep 12 21:05:28.834: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb  deployment-2669  2e0368c3-a1fe-49c8-94a2-fc9181ab5fb5 12672 3 2022-09-12 21:05:24 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 38e2e07f-8891-4825-98cd-a433f9c1fa18 0xc000c85e37 0xc000c85e38}] []  [{kube-controller-manager Update apps/v1 2022-09-12 21:05:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38e2e07f-8891-4825-98cd-a433f9c1fa18\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [] []  []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000c85ea8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
    Sep 12 21:05:28.849: INFO: Pod "webserver-deployment-795d758f88-724kz" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-724kz webserver-deployment-795d758f88- deployment-2669  318752f1-d32d-4b6d-bdc9-fb37f5654e1e 12685 0 2022-09-12 21:05:28 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2ccaaf05-d634-4502-8db7-4e7e71070064 0xc00052eb47 0xc00052eb48}] []  [{kube-controller-manager Update v1 2022-09-12 21:05:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ccaaf05-d634-4502-8db7-4e7e71070064\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-f9f7z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f9f7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-6izh7i-worker-938c6l,HostNetwork:false,HostPID:false,HostIPC:false,S
ecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 12 21:05:28.849: INFO: Pod "webserver-deployment-795d758f88-9zq5c" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-9zq5c webserver-deployment-795d758f88- deployment-2669  5a2d8f28-a157-4d02-84e3-16a1731d8018 12598 0 2022-09-12 21:05:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2ccaaf05-d634-4502-8db7-4e7e71070064 0xc00052f020 0xc00052f021}] []  [{kube-controller-manager Update v1 2022-09-12 21:05:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ccaaf05-d634-4502-8db7-4e7e71070064\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-12 21:05:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jr55w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jr55w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,All
owPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2022-09-12 21:05:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 12 21:05:28.849: INFO: Pod "webserver-deployment-795d758f88-g4brs" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-g4brs webserver-deployment-795d758f88- deployment-2669  aa65222f-62d5-4deb-b1d5-8986b2164087 12668 0 2022-09-12 21:05:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2ccaaf05-d634-4502-8db7-4e7e71070064 0xc00052f580 0xc00052f581}] []  [{kube-controller-manager Update v1 2022-09-12 21:05:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ccaaf05-d634-4502-8db7-4e7e71070064\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-12 21:05:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.68\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gmlw4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gmlw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:ni
l,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-6izh7i-worker-938c6l,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.68,StartTime:2022-09-12 21:05:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 12 21:05:28.850: INFO: Pod "webserver-deployment-795d758f88-m8bvn" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-m8bvn webserver-deployment-795d758f88- deployment-2669  f1fb9e69-83fc-4451-a586-e8a0f3e8ac54 12654 0 2022-09-12 21:05:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2ccaaf05-d634-4502-8db7-4e7e71070064 0xc00052f940 0xc00052f941}] []  [{kube-controller-manager Update v1 2022-09-12 21:05:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ccaaf05-d634-4502-8db7-4e7e71070064\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-12 21:05:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.63\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qhcbs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qhcbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:ni
l,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.1.63,StartTime:2022-09-12 21:05:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 12 21:05:28.850: INFO: Pod "webserver-deployment-795d758f88-mkhj5" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-mkhj5 webserver-deployment-795d758f88- deployment-2669  d078eb05-775c-43c3-8b12-bf81ec1d6f96 12660 0 2022-09-12 21:05:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2ccaaf05-d634-4502-8db7-4e7e71070064 0xc003d8e020 0xc003d8e021}] []  [{kube-controller-manager Update v1 2022-09-12 21:05:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ccaaf05-d634-4502-8db7-4e7e71070064\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-12 21:05:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-l98k8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l98k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:ni
l,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-6izh7i-worker-mgm4ov,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.40,StartTime:2022-09-12 21:05:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 12 21:05:28.851: INFO: Pod "webserver-deployment-795d758f88-t42s6" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-t42s6 webserver-deployment-795d758f88- deployment-2669  81bf35b4-68f7-448b-994b-80f76b4c2385 12662 0 2022-09-12 21:05:26 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2ccaaf05-d634-4502-8db7-4e7e71070064 0xc003d8e220 0xc003d8e221}] []  [{kube-controller-manager Update v1 2022-09-12 21:05:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ccaaf05-d634-4502-8db7-4e7e71070064\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-12 21:05:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.80\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5628g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5628g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:ni
l,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.80,StartTime:2022-09-12 21:05:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 12 21:05:28.851: INFO: Pod "webserver-deployment-847dcfb7fb-72ctn" is not available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-72ctn webserver-deployment-847dcfb7fb- deployment-2669  ebc9f677-f027-4d65-b113-31c8538d0c1e 12680 0 2022-09-12 21:05:28 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2e0368c3-a1fe-49c8-94a2-fc9181ab5fb5 0xc003d8e420 0xc003d8e421}] []  [{kube-controller-manager Update v1 2022-09-12 21:05:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e0368c3-a1fe-49c8-94a2-fc9181ab5fb5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-djmvn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-djmvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodS
ecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 12 21:05:28.851: INFO: Pod "webserver-deployment-847dcfb7fb-9btz2" is available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-9btz2 webserver-deployment-847dcfb7fb- deployment-2669  c88691b0-c893-44c2-9926-778f70ae9f3a 12565 0 2022-09-12 21:05:24 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2e0368c3-a1fe-49c8-94a2-fc9181ab5fb5 0xc003d8e557 0xc003d8e558}] []  [{kube-controller-manager Update v1 2022-09-12 21:05:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e0368c3-a1fe-49c8-94a2-fc9181ab5fb5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-12 21:05:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.66\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9nl4z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9nl4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:
nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-6izh7i-worker-938c6l,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.66,StartTime:2022-09-12 21:05:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-12 21:05:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://6602f5f799f61fb9c5c3ce482a720b815f3e6cfd1ab866eae62b41d0e58766d6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 12 21:05:28.851: INFO: Pod "webserver-deployment-847dcfb7fb-b979p" is available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-b979p webserver-deployment-847dcfb7fb- deployment-2669  b01e51ba-5979-4850-a90d-e8134601d023 12571 0 2022-09-12 21:05:24 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 2e0368c3-a1fe-49c8-94a2-fc9181ab5fb5 0xc003d8e740 0xc003d8e741}] []  [{kube-controller-manager Update v1 2022-09-12 21:05:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e0368c3-a1fe-49c8-94a2-fc9181ab5fb5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-12 21:05:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pzrmt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pzrmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:
nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-6izh7i-worker-938c6l,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-12 21:05:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.65,StartTime:2022-09-12 21:05:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-12 21:05:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://618da262a291329cfda66c058f497804301e4e65b3fb73e4d769e017523cee6b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:05:28.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-2669" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":38,"skipped":689,"failed":0}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Service endpoints latency
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 419 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:05:40.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svc-latency-4119" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":39,"skipped":704,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:05:42.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-7377" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":40,"skipped":717,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:05:45.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-watch-6132" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":74,"skipped":1463,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 21:05:46.091: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14e46eff-bca6-45b8-bead-8ead8cc75813" in namespace "downward-api-793" to be "Succeeded or Failed"
    Sep 12 21:05:46.098: INFO: Pod "downwardapi-volume-14e46eff-bca6-45b8-bead-8ead8cc75813": Phase="Pending", Reason="", readiness=false. Elapsed: 6.52196ms
    Sep 12 21:05:48.102: INFO: Pod "downwardapi-volume-14e46eff-bca6-45b8-bead-8ead8cc75813": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01049252s
    STEP: Saw pod success
    Sep 12 21:05:48.102: INFO: Pod "downwardapi-volume-14e46eff-bca6-45b8-bead-8ead8cc75813" satisfied condition "Succeeded or Failed"
    Sep 12 21:05:48.109: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod downwardapi-volume-14e46eff-bca6-45b8-bead-8ead8cc75813 container client-container: <nil>
    STEP: delete the pod
    Sep 12 21:05:48.138: INFO: Waiting for pod downwardapi-volume-14e46eff-bca6-45b8-bead-8ead8cc75813 to disappear
    Sep 12 21:05:48.141: INFO: Pod downwardapi-volume-14e46eff-bca6-45b8-bead-8ead8cc75813 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:05:48.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-793" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":75,"skipped":1465,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:05:49.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-7161" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":41,"skipped":727,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:05:48.195: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-533d9a8c-ec9a-4475-accc-1b9d6ef6c441
    STEP: Creating a pod to test consume configMaps
    Sep 12 21:05:48.244: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7c445951-9641-47d1-9ccd-b5ee56e93468" in namespace "projected-7915" to be "Succeeded or Failed"
    Sep 12 21:05:48.247: INFO: Pod "pod-projected-configmaps-7c445951-9641-47d1-9ccd-b5ee56e93468": Phase="Pending", Reason="", readiness=false. Elapsed: 2.903834ms
    Sep 12 21:05:50.257: INFO: Pod "pod-projected-configmaps-7c445951-9641-47d1-9ccd-b5ee56e93468": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012995965s
    STEP: Saw pod success
    Sep 12 21:05:50.257: INFO: Pod "pod-projected-configmaps-7c445951-9641-47d1-9ccd-b5ee56e93468" satisfied condition "Succeeded or Failed"
    Sep 12 21:05:50.265: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-projected-configmaps-7c445951-9641-47d1-9ccd-b5ee56e93468 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 21:05:50.286: INFO: Waiting for pod pod-projected-configmaps-7c445951-9641-47d1-9ccd-b5ee56e93468 to disappear
    Sep 12 21:05:50.290: INFO: Pod pod-projected-configmaps-7c445951-9641-47d1-9ccd-b5ee56e93468 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:05:50.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7915" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":76,"skipped":1481,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-projected-all-test-volume-24649caa-b7d7-4e7b-8ff1-f50339ec550e
    STEP: Creating secret with name secret-projected-all-test-volume-6da1f509-7942-436e-97db-44c01b77b5ba
    STEP: Creating a pod to test Check all projections for projected volume plugin
    Sep 12 21:05:49.207: INFO: Waiting up to 5m0s for pod "projected-volume-5e54cc2f-1568-4780-9780-fa4006345496" in namespace "projected-4818" to be "Succeeded or Failed"
    Sep 12 21:05:49.212: INFO: Pod "projected-volume-5e54cc2f-1568-4780-9780-fa4006345496": Phase="Pending", Reason="", readiness=false. Elapsed: 4.933538ms
    Sep 12 21:05:51.216: INFO: Pod "projected-volume-5e54cc2f-1568-4780-9780-fa4006345496": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008871874s
    STEP: Saw pod success
    Sep 12 21:05:51.216: INFO: Pod "projected-volume-5e54cc2f-1568-4780-9780-fa4006345496" satisfied condition "Succeeded or Failed"
    Sep 12 21:05:51.219: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod projected-volume-5e54cc2f-1568-4780-9780-fa4006345496 container projected-all-volume-test: <nil>
    STEP: delete the pod
    Sep 12 21:05:51.232: INFO: Waiting for pod projected-volume-5e54cc2f-1568-4780-9780-fa4006345496 to disappear
    Sep 12 21:05:51.235: INFO: Pod projected-volume-5e54cc2f-1568-4780-9780-fa4006345496 no longer exists
    [AfterEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:05:51.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4818" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":732,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:05:57.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-709" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":739,"failed":0}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
    Sep 12 21:06:01.550: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should honor timeout [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Setting timeout (1s) shorter than webhook latency (5s)
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
    STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is longer than webhook latency
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is empty (defaulted to 10s in v1)
    STEP: Registering slow webhook via the AdmissionRegistration API
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:06:13.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-9779" for this suite.
    STEP: Destroying namespace "webhook-9779-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":44,"skipped":757,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:06:13.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-5958" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":45,"skipped":760,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:06:20.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-897" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":77,"skipped":1529,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
    STEP: Destroying namespace "services-1055" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":78,"skipped":1546,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-998-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":79,"skipped":1554,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:06:30.662: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on node default medium
    Sep 12 21:06:30.714: INFO: Waiting up to 5m0s for pod "pod-4677c455-746f-497c-ad06-883dde78997d" in namespace "emptydir-560" to be "Succeeded or Failed"
    Sep 12 21:06:30.719: INFO: Pod "pod-4677c455-746f-497c-ad06-883dde78997d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.178723ms
    Sep 12 21:06:32.724: INFO: Pod "pod-4677c455-746f-497c-ad06-883dde78997d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009774668s
    STEP: Saw pod success
    Sep 12 21:06:32.724: INFO: Pod "pod-4677c455-746f-497c-ad06-883dde78997d" satisfied condition "Succeeded or Failed"
    Sep 12 21:06:32.727: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-4677c455-746f-497c-ad06-883dde78997d container test-container: <nil>
    STEP: delete the pod
    Sep 12 21:06:32.742: INFO: Waiting for pod pod-4677c455-746f-497c-ad06-883dde78997d to disappear
    Sep 12 21:06:32.745: INFO: Pod pod-4677c455-746f-497c-ad06-883dde78997d no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:06:32.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-560" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":80,"skipped":1555,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:07:48.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-2622" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":761,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:07:50.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-1594" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":47,"skipped":775,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:07:53.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-8636" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":807,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:08:45.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-2520" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":814,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:08:45.218: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 12 21:08:45.259: INFO: Waiting up to 5m0s for pod "downward-api-ed160f2e-71ac-4244-8639-abc6e0ff8520" in namespace "downward-api-1487" to be "Succeeded or Failed"
    Sep 12 21:08:45.263: INFO: Pod "downward-api-ed160f2e-71ac-4244-8639-abc6e0ff8520": Phase="Pending", Reason="", readiness=false. Elapsed: 3.512029ms
    Sep 12 21:08:47.268: INFO: Pod "downward-api-ed160f2e-71ac-4244-8639-abc6e0ff8520": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009142142s
    STEP: Saw pod success
    Sep 12 21:08:47.268: INFO: Pod "downward-api-ed160f2e-71ac-4244-8639-abc6e0ff8520" satisfied condition "Succeeded or Failed"
    Sep 12 21:08:47.272: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod downward-api-ed160f2e-71ac-4244-8639-abc6e0ff8520 container dapi-container: <nil>
    STEP: delete the pod
    Sep 12 21:08:47.292: INFO: Waiting for pod downward-api-ed160f2e-71ac-4244-8639-abc6e0ff8520 to disappear
    Sep 12 21:08:47.295: INFO: Pod downward-api-ed160f2e-71ac-4244-8639-abc6e0ff8520 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:08:47.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-1487" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":820,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:08:47.328: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-a4fa1b3b-2747-4c7d-82e0-2f2aec1dd033
    STEP: Creating a pod to test consume secrets
    Sep 12 21:08:47.374: INFO: Waiting up to 5m0s for pod "pod-secrets-c0485749-48e8-46f7-b13a-4742e8d8492f" in namespace "secrets-5779" to be "Succeeded or Failed"
    Sep 12 21:08:47.378: INFO: Pod "pod-secrets-c0485749-48e8-46f7-b13a-4742e8d8492f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.397192ms
    Sep 12 21:08:49.382: INFO: Pod "pod-secrets-c0485749-48e8-46f7-b13a-4742e8d8492f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008170893s
    STEP: Saw pod success
    Sep 12 21:08:49.382: INFO: Pod "pod-secrets-c0485749-48e8-46f7-b13a-4742e8d8492f" satisfied condition "Succeeded or Failed"
    Sep 12 21:08:49.386: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-secrets-c0485749-48e8-46f7-b13a-4742e8d8492f container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 12 21:08:49.400: INFO: Waiting for pod pod-secrets-c0485749-48e8-46f7-b13a-4742e8d8492f to disappear
    Sep 12 21:08:49.403: INFO: Pod pod-secrets-c0485749-48e8-46f7-b13a-4742e8d8492f no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:08:49.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-5779" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":829,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:08:51.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-384" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":52,"skipped":858,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:08:57.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-6175" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":53,"skipped":878,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
    Sep 12 21:09:03.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5755 explain e2e-test-crd-publish-openapi-9240-crds.spec'
    Sep 12 21:09:03.732: INFO: stderr: ""
    Sep 12 21:09:03.732: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9240-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
    Sep 12 21:09:03.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5755 explain e2e-test-crd-publish-openapi-9240-crds.spec.bars'
    Sep 12 21:09:03.966: INFO: stderr: ""
    Sep 12 21:09:03.966: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9240-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
    STEP: kubectl explain works to return error when explain is called on property that doesn't exist
    Sep 12 21:09:03.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5755 explain e2e-test-crd-publish-openapi-9240-crds.spec.bars2'
    Sep 12 21:09:04.211: INFO: rc: 1
    [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:09:06.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-5755" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":54,"skipped":887,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
    STEP: Destroying namespace "webhook-9089-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":55,"skipped":892,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:09:10.389: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep 12 21:09:10.457: INFO: Waiting up to 5m0s for pod "pod-8afb1cb9-bfe4-4f18-9cbe-8dd674c4f87d" in namespace "emptydir-8602" to be "Succeeded or Failed"
    Sep 12 21:09:10.463: INFO: Pod "pod-8afb1cb9-bfe4-4f18-9cbe-8dd674c4f87d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.773141ms
    Sep 12 21:09:12.468: INFO: Pod "pod-8afb1cb9-bfe4-4f18-9cbe-8dd674c4f87d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009258442s
    STEP: Saw pod success
    Sep 12 21:09:12.468: INFO: Pod "pod-8afb1cb9-bfe4-4f18-9cbe-8dd674c4f87d" satisfied condition "Succeeded or Failed"
    Sep 12 21:09:12.471: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-8afb1cb9-bfe4-4f18-9cbe-8dd674c4f87d container test-container: <nil>
    STEP: delete the pod
    Sep 12 21:09:12.486: INFO: Waiting for pod pod-8afb1cb9-bfe4-4f18-9cbe-8dd674c4f87d to disappear
    Sep 12 21:09:12.488: INFO: Pod pod-8afb1cb9-bfe4-4f18-9cbe-8dd674c4f87d no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:09:12.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8602" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":897,"failed":0}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 21:09:12.566: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e40bd3c-0e65-47c9-9322-ee4eeaf8dfcb" in namespace "projected-9068" to be "Succeeded or Failed"
    Sep 12 21:09:12.570: INFO: Pod "downwardapi-volume-2e40bd3c-0e65-47c9-9322-ee4eeaf8dfcb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.511408ms
    Sep 12 21:09:14.575: INFO: Pod "downwardapi-volume-2e40bd3c-0e65-47c9-9322-ee4eeaf8dfcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008696776s
    STEP: Saw pod success
    Sep 12 21:09:14.575: INFO: Pod "downwardapi-volume-2e40bd3c-0e65-47c9-9322-ee4eeaf8dfcb" satisfied condition "Succeeded or Failed"
    Sep 12 21:09:14.578: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod downwardapi-volume-2e40bd3c-0e65-47c9-9322-ee4eeaf8dfcb container client-container: <nil>
    STEP: delete the pod
    Sep 12 21:09:14.590: INFO: Waiting for pod downwardapi-volume-2e40bd3c-0e65-47c9-9322-ee4eeaf8dfcb to disappear
    Sep 12 21:09:14.593: INFO: Pod downwardapi-volume-2e40bd3c-0e65-47c9-9322-ee4eeaf8dfcb no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:09:14.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9068" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":914,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:09:14.603: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-f38a6004-d97b-4fe3-a840-05be7523130e
    STEP: Creating a pod to test consume secrets
    Sep 12 21:09:14.647: INFO: Waiting up to 5m0s for pod "pod-secrets-bbe959fd-f8f3-40bc-912e-c548a0859965" in namespace "secrets-7920" to be "Succeeded or Failed"
    Sep 12 21:09:14.650: INFO: Pod "pod-secrets-bbe959fd-f8f3-40bc-912e-c548a0859965": Phase="Pending", Reason="", readiness=false. Elapsed: 2.922292ms
    Sep 12 21:09:16.654: INFO: Pod "pod-secrets-bbe959fd-f8f3-40bc-912e-c548a0859965": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007290778s
    STEP: Saw pod success
    Sep 12 21:09:16.654: INFO: Pod "pod-secrets-bbe959fd-f8f3-40bc-912e-c548a0859965" satisfied condition "Succeeded or Failed"
    Sep 12 21:09:16.657: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-secrets-bbe959fd-f8f3-40bc-912e-c548a0859965 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 12 21:09:16.673: INFO: Waiting for pod pod-secrets-bbe959fd-f8f3-40bc-912e-c548a0859965 to disappear
    Sep 12 21:09:16.676: INFO: Pod pod-secrets-bbe959fd-f8f3-40bc-912e-c548a0859965 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:09:16.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-7920" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":915,"failed":0}

    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:09:16.686: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep 12 21:09:16.717: INFO: PodSpec: initContainers in spec.initContainers
    Sep 12 21:10:02.511: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a1675559-fae7-4419-8a5f-8e7405fc2296", GenerateName:"", Namespace:"init-container-5711", SelfLink:"", UID:"02f6d723-5186-4fcf-8159-731359a0ee70", ResourceVersion:"16180", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63798613756, loc:(*time.Location)(0x9e363e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"717080082"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003128d50), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003128d68)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003128d80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003128db0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-tglrd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002e2b220), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-tglrd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, 
Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-tglrd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-tglrd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00415d1d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"k8s-upgrade-and-conformance-6izh7i-worker-938c6l", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002811730), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00415d250)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00415d270)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00415d278), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00415d27c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0045000a0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798613756, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798613756, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798613756, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798613756, loc:(*time.Location)(0x9e363e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.5", PodIP:"192.168.6.87", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.6.87"}}, StartTime:(*v1.Time)(0xc003128de0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002811810)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002811880)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://3fd9082261b147a41b54b48b64bc5dbc0f3a4443b2f7e337e5dae9a3c79547f4", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e2b2e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e2b2a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00415d2ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:10:02.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-5711" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":59,"skipped":915,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:10:11.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-6229" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":60,"skipped":929,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "webhook-5245-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":61,"skipped":931,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-2477-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":62,"skipped":934,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:10:19.810: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep 12 21:10:19.853: INFO: Waiting up to 5m0s for pod "security-context-47c3307a-f802-47d2-9554-9aa1e653965c" in namespace "security-context-6511" to be "Succeeded or Failed"
    Sep 12 21:10:19.856: INFO: Pod "security-context-47c3307a-f802-47d2-9554-9aa1e653965c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.555292ms
    Sep 12 21:10:21.861: INFO: Pod "security-context-47c3307a-f802-47d2-9554-9aa1e653965c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007920328s
    STEP: Saw pod success
    Sep 12 21:10:21.861: INFO: Pod "security-context-47c3307a-f802-47d2-9554-9aa1e653965c" satisfied condition "Succeeded or Failed"
    Sep 12 21:10:21.865: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod security-context-47c3307a-f802-47d2-9554-9aa1e653965c container test-container: <nil>
    STEP: delete the pod
    Sep 12 21:10:21.887: INFO: Waiting for pod security-context-47c3307a-f802-47d2-9554-9aa1e653965c to disappear
    Sep 12 21:10:21.890: INFO: Pod security-context-47c3307a-f802-47d2-9554-9aa1e653965c no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:10:21.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-6511" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":63,"skipped":966,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:10:24.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-2742" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":64,"skipped":969,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:10:24.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-1237" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":65,"skipped":975,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:10:26.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-8273" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":66,"skipped":980,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:10:26.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-4999" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":67,"skipped":981,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 38 lines ...
    Sep 12 21:03:55.157: INFO: stderr: ""
    Sep 12 21:03:55.157: INFO: stdout: "true"
    Sep 12 21:03:55.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2613 get pods update-demo-nautilus-pvtg8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 12 21:03:55.248: INFO: stderr: ""
    Sep 12 21:03:55.248: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
    Sep 12 21:03:55.248: INFO: validating pod update-demo-nautilus-pvtg8
    Sep 12 21:07:28.145: INFO: update-demo-nautilus-pvtg8 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-pvtg8)
    Sep 12 21:07:33.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2613 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
    Sep 12 21:07:33.478: INFO: stderr: ""
    Sep 12 21:07:33.478: INFO: stdout: "update-demo-nautilus-nrzcm update-demo-nautilus-pvtg8 "
    Sep 12 21:07:33.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2613 get pods update-demo-nautilus-nrzcm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
    Sep 12 21:07:33.577: INFO: stderr: ""
    Sep 12 21:07:33.577: INFO: stdout: "true"
... skipping 11 lines ...
    Sep 12 21:07:33.780: INFO: stderr: ""
    Sep 12 21:07:33.780: INFO: stdout: "true"
    Sep 12 21:07:33.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2613 get pods update-demo-nautilus-pvtg8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 12 21:07:33.878: INFO: stderr: ""
    Sep 12 21:07:33.878: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
    Sep 12 21:07:33.878: INFO: validating pod update-demo-nautilus-pvtg8
    Sep 12 21:11:07.281: INFO: update-demo-nautilus-pvtg8 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-pvtg8)
    Sep 12 21:11:12.282: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 +0x2ad
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0036a3800)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
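This is the first real failure in the run: the update-demo validation loop repeatedly saw "the server is currently unable to handle the request" (consistent with the apiserver being briefly unavailable while this job upgrades the control plane) and exhausted its 300s budget. Before the elided remainder of the trace, here is a hedged sketch of the poll-until-valid loop that timed out; the helper name and retry handling are assumptions, only the kubectl invocation is taken from the log:

// Sketch: list update-demo pods via a go-template every 5s for up to
// 300s, treating transient apiserver errors as "retry", as above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func waitForUpdateDemoPods() error {
	return wait.Poll(5*time.Second, 300*time.Second, func() (bool, error) {
		out, err := exec.Command("kubectl", "--kubeconfig", "/tmp/kubeconfig",
			"--namespace=kubectl-2613", "get", "pods",
			"-o", "template", "--template={{range.items}}{{.metadata.name}} {{end}}",
			"-l", "name=update-demo").Output()
		if err != nil {
			return false, nil // transient apiserver error: try again
		}
		for _, name := range strings.Fields(string(out)) {
			fmt.Println("validating pod", name)
			// Per-pod image and state checks would go here; any failed
			// check means this pass is not yet valid and we keep polling.
		}
		return true, nil
	})
}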
... skipping 323 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  29s   default-scheduler  Successfully assigned pod-network-test-6042/netserver-3 to k8s-upgrade-and-conformance-6izh7i-worker-mgm4ov
      Normal  Pulled     29s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    29s   kubelet            Created container webserver
      Normal  Started    29s   kubelet            Started container webserver
    
    Sep 12 21:07:02.739: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.0.88:9080/dial?request=hostname&protocol=http&host=192.168.2.41&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}])
    Sep 12 21:07:02.739: INFO: ...failed...will try again in next pass
    Sep 12 21:07:02.739: INFO: Going to retry 1 out of 4 pods....
    Sep 12 21:07:02.739: INFO: Double-checking 1 pod on host 172.18.0.7 which wasn't seen the first time.
    Sep 12 21:07:02.739: INFO: Now attempting to probe pod [[[ 192.168.2.41 ]]]
    Sep 12 21:07:02.743: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.88:9080/dial?request=hostname&protocol=http&host=192.168.2.41&port=8080&tries=1'] Namespace:pod-network-test-6042 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 12 21:07:02.743: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 12 21:07:07.830: INFO: Waiting for responses: map[netserver-3:{}]
... skipping 377 lines ...
      ----    ------     ----   ----               -------
      Normal  Scheduled  5m56s  default-scheduler  Successfully assigned pod-network-test-6042/netserver-3 to k8s-upgrade-and-conformance-6izh7i-worker-mgm4ov
      Normal  Pulled     5m56s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    5m56s  kubelet            Created container webserver
      Normal  Started    5m56s  kubelet            Started container webserver
    
    Sep 12 21:12:29.400: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.0.88:9080/dial?request=hostname&protocol=http&host=192.168.2.41&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}])
    Sep 12 21:12:29.400: INFO: ... Done probing pod [[[ 192.168.2.41 ]]]
    Sep 12 21:12:29.400: INFO: succeeded at polling 3 out of 4 connections
    Sep 12 21:12:29.400: INFO: pod polling failure summary:
    Sep 12 21:12:29.400: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.0.88:9080/dial?request=hostname&protocol=http&host=192.168.2.41&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}]
    Sep 12 21:12:29.400: FAIL: failed,  1 out of 4 connections failed
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a70480)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 12 21:12:29.400: failed,  1 out of 4 connections failed
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
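The second failure: one of the four netserver pods (netserver-3 on 192.168.2.41) never answered through 46 tries, so pod-to-pod HTTP on that node is broken. The probe execs curl inside a test pod against the agnhost /dial endpoint shown in the log and expects the responder's hostname in the JSON reply. A hedged sketch of building and checking that dial request (the response struct matches agnhost's documented {"responses": [...]} shape; function names are assumptions):

// Sketch: construct the /dial URL from the log and check whether the
// expected hostname (netserver-3) appears among the responses.
package main

import (
	"encoding/json"
	"fmt"
	"net/url"
)

type dialResponse struct {
	Responses []string `json:"responses"`
}

func dialURL(testPodIP, targetIP string) string {
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", "http")
	q.Set("host", targetIP)
	q.Set("port", "8080")
	q.Set("tries", "1")
	return fmt.Sprintf("http://%s:9080/dial?%s", testPodIP, q.Encode())
}

func sawExpected(body []byte, want string) (bool, error) {
	var r dialResponse
	if err := json.Unmarshal(body, &r); err != nil {
		return false, err
	}
	for _, got := range r.Responses {
		if got == want {
			return true, nil
		}
	}
	return false, nil // "retrieved map[]" in the log: no responses at all
}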
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 76 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
        should perform rolling updates and roll backs of template modifications [Conformance]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":68,"skipped":1031,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 42 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:13:43.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-6761" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":69,"skipped":1032,"failed":0}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:15:01.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-4894" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":70,"skipped":1049,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:15:02.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-9784" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":71,"skipped":1065,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:15:18.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-1388" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":72,"skipped":1069,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:15:34.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-3931" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":73,"skipped":1085,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:15:34.685: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-fd33c11e-f5ec-40cf-8b30-aee3da8a6adb
    STEP: Creating a pod to test consume configMaps
    Sep 12 21:15:34.723: INFO: Waiting up to 5m0s for pod "pod-configmaps-d8fb24ce-ad01-4967-b73b-d887f1658810" in namespace "configmap-7650" to be "Succeeded or Failed"
    Sep 12 21:15:34.727: INFO: Pod "pod-configmaps-d8fb24ce-ad01-4967-b73b-d887f1658810": Phase="Pending", Reason="", readiness=false. Elapsed: 2.81602ms
    Sep 12 21:15:36.731: INFO: Pod "pod-configmaps-d8fb24ce-ad01-4967-b73b-d887f1658810": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007142148s
    STEP: Saw pod success
    Sep 12 21:15:36.731: INFO: Pod "pod-configmaps-d8fb24ce-ad01-4967-b73b-d887f1658810" satisfied condition "Succeeded or Failed"
    Sep 12 21:15:36.734: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-configmaps-d8fb24ce-ad01-4967-b73b-d887f1658810 container configmap-volume-test: <nil>
    STEP: delete the pod
    Sep 12 21:15:36.757: INFO: Waiting for pod pod-configmaps-d8fb24ce-ad01-4967-b73b-d887f1658810 to disappear
    Sep 12 21:15:36.759: INFO: Pod pod-configmaps-d8fb24ce-ad01-4967-b73b-d887f1658810 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:15:36.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-7650" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":74,"skipped":1086,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-1065-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":75,"skipped":1115,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:15:51.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-2568" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":76,"skipped":1139,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-network] HostPort
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:16:05.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "hostport-9968" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":77,"skipped":1140,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:16:15.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-2863" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":78,"skipped":1144,"failed":0}

    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:16:15.141: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-645f6a07-745a-4273-902d-2edc17288db5
    STEP: Creating a pod to test consume configMaps
    Sep 12 21:16:15.192: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-82efcdb9-c7b7-47fd-b3a8-cc131b1c3036" in namespace "projected-7766" to be "Succeeded or Failed"
    Sep 12 21:16:15.196: INFO: Pod "pod-projected-configmaps-82efcdb9-c7b7-47fd-b3a8-cc131b1c3036": Phase="Pending", Reason="", readiness=false. Elapsed: 3.946001ms
    Sep 12 21:16:17.201: INFO: Pod "pod-projected-configmaps-82efcdb9-c7b7-47fd-b3a8-cc131b1c3036": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00912237s
    STEP: Saw pod success
    Sep 12 21:16:17.201: INFO: Pod "pod-projected-configmaps-82efcdb9-c7b7-47fd-b3a8-cc131b1c3036" satisfied condition "Succeeded or Failed"
    Sep 12 21:16:17.204: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-projected-configmaps-82efcdb9-c7b7-47fd-b3a8-cc131b1c3036 container projected-configmap-volume-test: <nil>
    STEP: delete the pod
    Sep 12 21:16:17.229: INFO: Waiting for pod pod-projected-configmaps-82efcdb9-c7b7-47fd-b3a8-cc131b1c3036 to disappear
    Sep 12 21:16:17.232: INFO: Pod pod-projected-configmaps-82efcdb9-c7b7-47fd-b3a8-cc131b1c3036 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:16:17.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7766" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":79,"skipped":1144,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:17:17.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-1577" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":80,"skipped":1158,"failed":0}

    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:17:17.353: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 12 21:17:17.400: INFO: Waiting up to 5m0s for pod "downward-api-76effdf3-063a-45a0-85be-c53bf11ff9a8" in namespace "downward-api-5893" to be "Succeeded or Failed"
    Sep 12 21:17:17.404: INFO: Pod "downward-api-76effdf3-063a-45a0-85be-c53bf11ff9a8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.438538ms
    Sep 12 21:17:19.409: INFO: Pod "downward-api-76effdf3-063a-45a0-85be-c53bf11ff9a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008741916s
    STEP: Saw pod success
    Sep 12 21:17:19.409: INFO: Pod "downward-api-76effdf3-063a-45a0-85be-c53bf11ff9a8" satisfied condition "Succeeded or Failed"
    Sep 12 21:17:19.412: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod downward-api-76effdf3-063a-45a0-85be-c53bf11ff9a8 container dapi-container: <nil>
    STEP: delete the pod
    Sep 12 21:17:19.428: INFO: Waiting for pod downward-api-76effdf3-063a-45a0-85be-c53bf11ff9a8 to disappear
    Sep 12 21:17:19.430: INFO: Pod downward-api-76effdf3-063a-45a0-85be-c53bf11ff9a8 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:17:19.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-5893" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":81,"skipped":1158,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:17:21.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-2020" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":82,"skipped":1194,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:18:02.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-7087" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":83,"skipped":1195,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
    STEP: Deploying the webhook pod
    STEP: Wait for the deployment to be ready
    Sep 12 21:18:02.928: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep 12 21:18:05.955: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should unconditionally reject operations on fail closed webhook [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
    STEP: create a namespace for the webhook
    STEP: create a configmap should be unconditionally rejected by the webhook
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:18:06.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-5582" for this suite.
    STEP: Destroying namespace "webhook-5582-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":84,"skipped":1214,"failed":0}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 107 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:18:11.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-7835" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":85,"skipped":1217,"failed":0}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:18:17.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-7240" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":86,"skipped":1229,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":80,"skipped":1560,"failed":4,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:12:29.420: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 164 lines ...
    
    Sep 12 21:12:59.127: INFO: 
    Output of kubectl describe pod pod-network-test-1328/netserver-2:
    
    Sep 12 21:12:59.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-1328 describe pod netserver-2 --namespace=pod-network-test-1328'
    Sep 12 21:12:59.233: INFO: stderr: ""
    Sep 12 21:12:59.233: INFO: stdout: "Name:         netserver-2\nNamespace:    pod-network-test-1328\nPriority:     0\nNode:         k8s-upgrade-and-conformance-6izh7i-worker-938c6l/172.18.0.5\nStart Time:   Mon, 12 Sep 2022 21:12:29 +0000\nLabels:       selector-87ce8ae6-62ff-4a63-b693-db4bc61b5490=true\nAnnotations:  <none>\nStatus:       Running\nIP:           192.168.6.99\nIPs:\n  IP:  192.168.6.99\nContainers:\n  webserver:\n    Container ID:  containerd://d3e72a84749fa907bec8f18347b8d547d4171441ad220a4084d5dba7b898a452\n    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:      k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Ports:         8080/TCP, 8081/UDP\n    Host Ports:    0/TCP, 0/UDP\n    Args:\n      netexec\n      --http-port=8080\n      --udp-port=8081\n    State:          Running\n      Started:      Mon, 12 Sep 2022 21:12:31 +0000\n    Ready:          True\n    Restart Count:  0\n    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sf4n9 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-sf4n9:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              kubernetes.io/hostname=k8s-upgrade-and-conformance-6izh7i-worker-938c6l\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type     Reason       Age   From               Message\n  ----     ------       ----  ----               -------\n  Normal   Scheduled    29s   default-scheduler  Successfully assigned pod-network-test-1328/netserver-2 to k8s-upgrade-and-conformance-6izh7i-worker-938c6l\n  Warning  FailedMount  29s   kubelet            MountVolume.SetUp failed for volume \"kube-api-access-sf4n9\" : failed to sync configmap cache: timed out waiting for the condition\n  Normal   Pulled       28s   kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal   Created      28s   kubelet            Created container webserver\n  Normal   Started      28s   kubelet            Started container webserver\n"
    Sep 12 21:12:59.233: INFO: Name:         netserver-2
    Namespace:    pod-network-test-1328
    Priority:     0
    Node:         k8s-upgrade-and-conformance-6izh7i-worker-938c6l/172.18.0.5
    Start Time:   Mon, 12 Sep 2022 21:12:29 +0000
    Labels:       selector-87ce8ae6-62ff-4a63-b693-db4bc61b5490=true
... skipping 40 lines ...
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason       Age   From               Message
      ----     ------       ----  ----               -------
      Normal   Scheduled    29s   default-scheduler  Successfully assigned pod-network-test-1328/netserver-2 to k8s-upgrade-and-conformance-6izh7i-worker-938c6l
      Warning  FailedMount  29s   kubelet            MountVolume.SetUp failed for volume "kube-api-access-sf4n9" : failed to sync configmap cache: timed out waiting for the condition
      Normal   Pulled       28s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal   Created      28s   kubelet            Created container webserver
      Normal   Started      28s   kubelet            Started container webserver
    
    Sep 12 21:12:59.233: INFO: 
    Output of kubectl describe pod pod-network-test-1328/netserver-3:
... skipping 54 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  29s   default-scheduler  Successfully assigned pod-network-test-1328/netserver-3 to k8s-upgrade-and-conformance-6izh7i-worker-mgm4ov
      Normal  Pulled     29s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    29s   kubelet            Created container webserver
      Normal  Started    29s   kubelet            Started container webserver
    
    Sep 12 21:12:59.341: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.1.75:9080/dial?request=hostname&protocol=http&host=192.168.2.44&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}])
    Sep 12 21:12:59.341: INFO: ...failed...will try again in next pass
    Sep 12 21:12:59.341: INFO: Going to retry 1 out of 4 pods....
    Sep 12 21:12:59.341: INFO: Doublechecking 1 pods in host 172.18.0.7 which weren't seen the first time.
    Sep 12 21:12:59.341: INFO: Now attempting to probe pod [[[ 192.168.2.44 ]]]
    Sep 12 21:12:59.344: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.75:9080/dial?request=hostname&protocol=http&host=192.168.2.44&port=8080&tries=1'] Namespace:pod-network-test-1328 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 12 21:12:59.345: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 12 21:13:04.423: INFO: Waiting for responses: map[netserver-3:{}]
... skipping 258 lines ...
    
    Sep 12 21:18:25.828: INFO: 
    Output of kubectl describe pod pod-network-test-1328/netserver-2:
    
    Sep 12 21:18:25.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-1328 describe pod netserver-2 --namespace=pod-network-test-1328'
    Sep 12 21:18:25.938: INFO: stderr: ""
    Sep 12 21:18:25.938: INFO: stdout: "Name:         netserver-2\nNamespace:    pod-network-test-1328\nPriority:     0\nNode:         k8s-upgrade-and-conformance-6izh7i-worker-938c6l/172.18.0.5\nStart Time:   Mon, 12 Sep 2022 21:12:29 +0000\nLabels:       selector-87ce8ae6-62ff-4a63-b693-db4bc61b5490=true\nAnnotations:  <none>\nStatus:       Running\nIP:           192.168.6.99\nIPs:\n  IP:  192.168.6.99\nContainers:\n  webserver:\n    Container ID:  containerd://d3e72a84749fa907bec8f18347b8d547d4171441ad220a4084d5dba7b898a452\n    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:      k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Ports:         8080/TCP, 8081/UDP\n    Host Ports:    0/TCP, 0/UDP\n    Args:\n      netexec\n      --http-port=8080\n      --udp-port=8081\n    State:          Running\n      Started:      Mon, 12 Sep 2022 21:12:31 +0000\n    Ready:          True\n    Restart Count:  0\n    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sf4n9 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-sf4n9:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              kubernetes.io/hostname=k8s-upgrade-and-conformance-6izh7i-worker-938c6l\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type     Reason       Age    From               Message\n  ----     ------       ----   ----               -------\n  Normal   Scheduled    5m56s  default-scheduler  Successfully assigned pod-network-test-1328/netserver-2 to k8s-upgrade-and-conformance-6izh7i-worker-938c6l\n  Warning  FailedMount  5m55s  kubelet            MountVolume.SetUp failed for volume \"kube-api-access-sf4n9\" : failed to sync configmap cache: timed out waiting for the condition\n  Normal   Pulled       5m54s  kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal   Created      5m54s  kubelet            Created container webserver\n  Normal   Started      5m54s  kubelet            Started container webserver\n"
    Sep 12 21:18:25.938: INFO: Name:         netserver-2
    Namespace:    pod-network-test-1328
    Priority:     0
    Node:         k8s-upgrade-and-conformance-6izh7i-worker-938c6l/172.18.0.5
    Start Time:   Mon, 12 Sep 2022 21:12:29 +0000
    Labels:       selector-87ce8ae6-62ff-4a63-b693-db4bc61b5490=true
... skipping 40 lines ...
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason       Age    From               Message
      ----     ------       ----   ----               -------
      Normal   Scheduled    5m56s  default-scheduler  Successfully assigned pod-network-test-1328/netserver-2 to k8s-upgrade-and-conformance-6izh7i-worker-938c6l
      Warning  FailedMount  5m55s  kubelet            MountVolume.SetUp failed for volume "kube-api-access-sf4n9" : failed to sync configmap cache: timed out waiting for the condition
      Normal   Pulled       5m54s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal   Created      5m54s  kubelet            Created container webserver
      Normal   Started      5m54s  kubelet            Started container webserver
    
    Sep 12 21:18:25.938: INFO: 
    Output of kubectl describe pod pod-network-test-1328/netserver-3:
... skipping 54 lines ...
      ----    ------     ----   ----               -------
      Normal  Scheduled  5m56s  default-scheduler  Successfully assigned pod-network-test-1328/netserver-3 to k8s-upgrade-and-conformance-6izh7i-worker-mgm4ov
      Normal  Pulled     5m56s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    5m56s  kubelet            Created container webserver
      Normal  Started    5m56s  kubelet            Started container webserver
    
    Sep 12 21:18:26.040: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.1.75:9080/dial?request=hostname&protocol=http&host=192.168.2.44&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}])
    Sep 12 21:18:26.040: INFO: ... Done probing pod [[[ 192.168.2.44 ]]]
    Sep 12 21:18:26.040: INFO: succeeded at polling 3 out of 4 connections
    Sep 12 21:18:26.040: INFO: pod polling failure summary:
    Sep 12 21:18:26.040: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.1.75:9080/dial?request=hostname&protocol=http&host=192.168.2.44&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}]
    Sep 12 21:18:26.041: FAIL: failed,  1 out of 4 connections failed
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a70480)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 12 21:18:26.041: failed,  1 out of 4 connections failed
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
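
Note: the dial check execs curl inside the standalone test-container-pod against its own netexec /dial endpoint (192.168.1.75:9080), which in turn dials the target netserver pod (192.168.2.44, netserver-3); "retrieved map[]" means that inner dial got no response even after 46 tries, pointing at broken pod-to-pod connectivity to that one pod rather than a test bug. The probe can be re-run by hand with the same command the framework used in this run:

kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-1328 exec test-container-pod -- \
  curl -g -q -s 'http://192.168.1.75:9080/dial?request=hostname&protocol=http&host=192.168.2.44&port=8080&tries=1'
# On success this returns roughly {"responses":["netserver-3"]}; in this run
# the response set stayed empty for every attempt.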
    ------------------------------
    {"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":45,"skipped":733,"failed":6,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:11:12.911: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 35 lines ...
    Sep 12 21:11:19.401: INFO: stderr: ""
    Sep 12 21:11:19.401: INFO: stdout: "true"
    Sep 12 21:11:19.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5825 get pods update-demo-nautilus-crz25 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 12 21:11:19.496: INFO: stderr: ""
    Sep 12 21:11:19.497: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
    Sep 12 21:11:19.497: INFO: validating pod update-demo-nautilus-crz25
    Sep 12 21:14:52.561: INFO: update-demo-nautilus-crz25 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-crz25)
    Sep 12 21:14:57.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5825 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
    Sep 12 21:14:57.667: INFO: stderr: ""
    Sep 12 21:14:57.667: INFO: stdout: "update-demo-nautilus-7ntgd update-demo-nautilus-crz25 "
    Sep 12 21:14:57.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5825 get pods update-demo-nautilus-7ntgd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
    Sep 12 21:14:57.757: INFO: stderr: ""
    Sep 12 21:14:57.757: INFO: stdout: "true"
... skipping 11 lines ...
    Sep 12 21:14:57.943: INFO: stderr: ""
    Sep 12 21:14:57.943: INFO: stdout: "true"
    Sep 12 21:14:57.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5825 get pods update-demo-nautilus-crz25 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 12 21:14:58.033: INFO: stderr: ""
    Sep 12 21:14:58.033: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
    Sep 12 21:14:58.033: INFO: validating pod update-demo-nautilus-crz25
    Sep 12 21:18:31.697: INFO: update-demo-nautilus-crz25 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-crz25)
    Sep 12 21:18:36.700: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 +0x2ad
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0036a3800)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
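
Note: the Update Demo validator polls each replica with kubectl Go-template output until the container reports running and the expected image, then re-reads the pod; the two roughly 3.5-minute "server is currently unable to handle the request" stalls above consumed the 300-second budget, which fits apiserver unavailability during the control-plane upgrade rather than a pod problem. The running-state check, exactly as issued by the framework in this run:

kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5825 get pods update-demo-nautilus-crz25 \
  -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
# Prints "true" once the update-demo container is running; the follow-up
# "validating pod" read is the call that timed out in this run.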
... skipping 57 lines ...
    Sep 12 21:18:20.012: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:20.016: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:20.020: INFO: Unable to read jessie_udp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:20.023: INFO: Unable to read jessie_tcp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:20.026: INFO: Unable to read jessie_udp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:20.030: INFO: Unable to read jessie_tcp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:20.058: INFO: Lookups using dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8441 wheezy_tcp@dns-test-service.dns-8441 wheezy_udp@dns-test-service.dns-8441.svc wheezy_tcp@dns-test-service.dns-8441.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8441 jessie_tcp@dns-test-service.dns-8441 jessie_udp@dns-test-service.dns-8441.svc jessie_tcp@dns-test-service.dns-8441.svc]
    
    Sep 12 21:18:25.063: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:25.068: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:25.072: INFO: Unable to read wheezy_udp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:25.076: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:25.080: INFO: Unable to read wheezy_udp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:25.085: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:25.123: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:25.127: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:25.131: INFO: Unable to read jessie_udp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:25.134: INFO: Unable to read jessie_tcp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:25.138: INFO: Unable to read jessie_udp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:25.142: INFO: Unable to read jessie_tcp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:25.169: INFO: Lookups using dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8441 wheezy_tcp@dns-test-service.dns-8441 wheezy_udp@dns-test-service.dns-8441.svc wheezy_tcp@dns-test-service.dns-8441.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8441 jessie_tcp@dns-test-service.dns-8441 jessie_udp@dns-test-service.dns-8441.svc jessie_tcp@dns-test-service.dns-8441.svc]
    
    Sep 12 21:18:30.063: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:30.066: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:30.069: INFO: Unable to read wheezy_udp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:30.072: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:30.075: INFO: Unable to read wheezy_udp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:30.078: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:30.103: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:30.106: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:30.108: INFO: Unable to read jessie_udp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:30.110: INFO: Unable to read jessie_tcp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:30.113: INFO: Unable to read jessie_udp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:30.115: INFO: Unable to read jessie_tcp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:30.135: INFO: Lookups using dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8441 wheezy_tcp@dns-test-service.dns-8441 wheezy_udp@dns-test-service.dns-8441.svc wheezy_tcp@dns-test-service.dns-8441.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8441 jessie_tcp@dns-test-service.dns-8441 jessie_udp@dns-test-service.dns-8441.svc jessie_tcp@dns-test-service.dns-8441.svc]
    
    Sep 12 21:18:35.062: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:35.065: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:35.068: INFO: Unable to read wheezy_udp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:35.070: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:35.073: INFO: Unable to read wheezy_udp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:35.075: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:35.099: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:35.102: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:35.105: INFO: Unable to read jessie_udp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:35.108: INFO: Unable to read jessie_tcp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:35.111: INFO: Unable to read jessie_udp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:35.114: INFO: Unable to read jessie_tcp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:35.136: INFO: Lookups using dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8441 wheezy_tcp@dns-test-service.dns-8441 wheezy_udp@dns-test-service.dns-8441.svc wheezy_tcp@dns-test-service.dns-8441.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8441 jessie_tcp@dns-test-service.dns-8441 jessie_udp@dns-test-service.dns-8441.svc jessie_tcp@dns-test-service.dns-8441.svc]
    
    Sep 12 21:18:40.063: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:40.067: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:40.070: INFO: Unable to read wheezy_udp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:40.073: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:40.076: INFO: Unable to read wheezy_udp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:40.079: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:40.109: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:40.112: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:40.115: INFO: Unable to read jessie_udp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:40.120: INFO: Unable to read jessie_tcp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:40.124: INFO: Unable to read jessie_udp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:40.127: INFO: Unable to read jessie_tcp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:40.154: INFO: Lookups using dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8441 wheezy_tcp@dns-test-service.dns-8441 wheezy_udp@dns-test-service.dns-8441.svc wheezy_tcp@dns-test-service.dns-8441.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8441 jessie_tcp@dns-test-service.dns-8441 jessie_udp@dns-test-service.dns-8441.svc jessie_tcp@dns-test-service.dns-8441.svc]
    
    Sep 12 21:18:45.063: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:45.067: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:45.071: INFO: Unable to read wheezy_udp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:45.075: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:45.079: INFO: Unable to read wheezy_udp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:45.083: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:45.115: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:45.118: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:45.121: INFO: Unable to read jessie_udp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:45.124: INFO: Unable to read jessie_tcp@dns-test-service.dns-8441 from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:45.127: INFO: Unable to read jessie_udp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:45.130: INFO: Unable to read jessie_tcp@dns-test-service.dns-8441.svc from pod dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392: the server could not find the requested resource (get pods dns-test-439459ce-dc14-43ce-9560-68c124dfd392)
    Sep 12 21:18:45.167: INFO: Lookups using dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8441 wheezy_tcp@dns-test-service.dns-8441 wheezy_udp@dns-test-service.dns-8441.svc wheezy_tcp@dns-test-service.dns-8441.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8441 jessie_tcp@dns-test-service.dns-8441 jessie_udp@dns-test-service.dns-8441.svc jessie_tcp@dns-test-service.dns-8441.svc]
    
    Sep 12 21:18:50.154: INFO: DNS probes using dns-8441/dns-test-439459ce-dc14-43ce-9560-68c124dfd392 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:18:50.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-8441" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":87,"skipped":1290,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:18:52.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-8629" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":88,"skipped":1359,"failed":0}
    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:18:52.154: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-f11e9856-383f-479e-8fa7-3b3dac1b8685
    STEP: Creating a pod to test consume configMaps
    Sep 12 21:18:52.198: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f03e5569-0ec7-4e09-bb50-34958f5b2a80" in namespace "projected-7923" to be "Succeeded or Failed"
    Sep 12 21:18:52.201: INFO: Pod "pod-projected-configmaps-f03e5569-0ec7-4e09-bb50-34958f5b2a80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.919417ms
    Sep 12 21:18:54.204: INFO: Pod "pod-projected-configmaps-f03e5569-0ec7-4e09-bb50-34958f5b2a80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006661993s
    STEP: Saw pod success
    Sep 12 21:18:54.204: INFO: Pod "pod-projected-configmaps-f03e5569-0ec7-4e09-bb50-34958f5b2a80" satisfied condition "Succeeded or Failed"
    Sep 12 21:18:54.207: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-projected-configmaps-f03e5569-0ec7-4e09-bb50-34958f5b2a80 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 21:18:54.226: INFO: Waiting for pod pod-projected-configmaps-f03e5569-0ec7-4e09-bb50-34958f5b2a80 to disappear
    Sep 12 21:18:54.228: INFO: Pod pod-projected-configmaps-f03e5569-0ec7-4e09-bb50-34958f5b2a80 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:18:54.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7923" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":89,"skipped":1377,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":45,"skipped":733,"failed":7,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:18:37.075: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 123 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:18:57.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-1663" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":46,"skipped":733,"failed":7,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:18:59.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-5749" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":47,"skipped":751,"failed":7,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:06.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-9969" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":90,"skipped":1408,"failed":0}
    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
    [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
    STEP: Watching for error events or started pod
    STEP: Waiting for pod completion
    STEP: Checking that the pod succeeded
    STEP: Getting logs from the pod
    STEP: Checking that the sysctl is actually updated
    [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:08.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-3334" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":91,"skipped":1418,"failed":0}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:23.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-6560" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":48,"skipped":760,"failed":7,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 21:19:23.775: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b0f196a-19c3-4662-bbcd-8fd53bcdeb3e" in namespace "projected-3027" to be "Succeeded or Failed"
    Sep 12 21:19:23.779: INFO: Pod "downwardapi-volume-7b0f196a-19c3-4662-bbcd-8fd53bcdeb3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.91927ms
    Sep 12 21:19:25.787: INFO: Pod "downwardapi-volume-7b0f196a-19c3-4662-bbcd-8fd53bcdeb3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011284248s
    STEP: Saw pod success
    Sep 12 21:19:25.787: INFO: Pod "downwardapi-volume-7b0f196a-19c3-4662-bbcd-8fd53bcdeb3e" satisfied condition "Succeeded or Failed"
    Sep 12 21:19:25.791: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod downwardapi-volume-7b0f196a-19c3-4662-bbcd-8fd53bcdeb3e container client-container: <nil>
    STEP: delete the pod
    Sep 12 21:19:25.807: INFO: Waiting for pod downwardapi-volume-7b0f196a-19c3-4662-bbcd-8fd53bcdeb3e to disappear
    Sep 12 21:19:25.810: INFO: Pod downwardapi-volume-7b0f196a-19c3-4662-bbcd-8fd53bcdeb3e no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:25.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3027" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":776,"failed":7,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:25.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-5268" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":50,"skipped":788,"failed":7,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:26.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3541" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":51,"skipped":839,"failed":7,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 43 lines ...
    STEP: Destroying namespace "services-8938" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":52,"skipped":883,"failed":7,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 21:19:26.345: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5fa266c0-6c17-455a-b65d-5629efb05ae8" in namespace "downward-api-2201" to be "Succeeded or Failed"
    Sep 12 21:19:26.350: INFO: Pod "downwardapi-volume-5fa266c0-6c17-455a-b65d-5629efb05ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.400743ms
    Sep 12 21:19:28.355: INFO: Pod "downwardapi-volume-5fa266c0-6c17-455a-b65d-5629efb05ae8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010100817s
    STEP: Saw pod success
    Sep 12 21:19:28.355: INFO: Pod "downwardapi-volume-5fa266c0-6c17-455a-b65d-5629efb05ae8" satisfied condition "Succeeded or Failed"
    Sep 12 21:19:28.358: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod downwardapi-volume-5fa266c0-6c17-455a-b65d-5629efb05ae8 container client-container: <nil>
    STEP: delete the pod
    Sep 12 21:19:28.374: INFO: Waiting for pod downwardapi-volume-5fa266c0-6c17-455a-b65d-5629efb05ae8 to disappear
    Sep 12 21:19:28.377: INFO: Pod downwardapi-volume-5fa266c0-6c17-455a-b65d-5629efb05ae8 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:28.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-2201" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":886,"failed":7,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:31.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-2516" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":54,"skipped":887,"failed":7,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:33.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-2996" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":55,"skipped":927,"failed":7,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:19:33.733: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-eb9ce0c0-8e91-4e82-8938-54c4bc16bb7b
    STEP: Creating a pod to test consume configMaps
    Sep 12 21:19:33.777: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-24bd7f5d-0ede-4998-b966-eedf1793d3b1" in namespace "projected-3221" to be "Succeeded or Failed"
    Sep 12 21:19:33.781: INFO: Pod "pod-projected-configmaps-24bd7f5d-0ede-4998-b966-eedf1793d3b1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.572818ms
    Sep 12 21:19:35.786: INFO: Pod "pod-projected-configmaps-24bd7f5d-0ede-4998-b966-eedf1793d3b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008822164s
    STEP: Saw pod success
    Sep 12 21:19:35.786: INFO: Pod "pod-projected-configmaps-24bd7f5d-0ede-4998-b966-eedf1793d3b1" satisfied condition "Succeeded or Failed"
    Sep 12 21:19:35.792: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod pod-projected-configmaps-24bd7f5d-0ede-4998-b966-eedf1793d3b1 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 21:19:35.822: INFO: Waiting for pod pod-projected-configmaps-24bd7f5d-0ede-4998-b966-eedf1793d3b1 to disappear
    Sep 12 21:19:35.824: INFO: Pod pod-projected-configmaps-24bd7f5d-0ede-4998-b966-eedf1793d3b1 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:35.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3221" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":947,"failed":7,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:36.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-8057" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":92,"skipped":1426,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:41.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-846" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":93,"skipped":1445,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:41.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-7141" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":94,"skipped":1465,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 21:19:41.927: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57fdceef-0a63-42dc-92f9-7358c7fbeed0" in namespace "downward-api-659" to be "Succeeded or Failed"
    Sep 12 21:19:41.930: INFO: Pod "downwardapi-volume-57fdceef-0a63-42dc-92f9-7358c7fbeed0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.818069ms
    Sep 12 21:19:43.934: INFO: Pod "downwardapi-volume-57fdceef-0a63-42dc-92f9-7358c7fbeed0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007012219s
    STEP: Saw pod success
    Sep 12 21:19:43.935: INFO: Pod "downwardapi-volume-57fdceef-0a63-42dc-92f9-7358c7fbeed0" satisfied condition "Succeeded or Failed"
    Sep 12 21:19:43.938: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod downwardapi-volume-57fdceef-0a63-42dc-92f9-7358c7fbeed0 container client-container: <nil>
    STEP: delete the pod
    Sep 12 21:19:43.953: INFO: Waiting for pod downwardapi-volume-57fdceef-0a63-42dc-92f9-7358c7fbeed0 to disappear
    Sep 12 21:19:43.956: INFO: Pod downwardapi-volume-57fdceef-0a63-42dc-92f9-7358c7fbeed0 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:43.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-659" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":95,"skipped":1476,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] KubeletManagedEtcHosts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:48.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "e2e-kubelet-etc-hosts-6817" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":96,"skipped":1482,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:19:48.922: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-caacb5fe-4667-4d4a-94e8-c3da7ab539cc
    STEP: Creating a pod to test consume secrets
    Sep 12 21:19:48.963: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4559444b-e527-44a6-8ff7-8170bb5e39ae" in namespace "projected-3765" to be "Succeeded or Failed"
    Sep 12 21:19:48.965: INFO: Pod "pod-projected-secrets-4559444b-e527-44a6-8ff7-8170bb5e39ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150027ms
    Sep 12 21:19:50.970: INFO: Pod "pod-projected-secrets-4559444b-e527-44a6-8ff7-8170bb5e39ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007396566s
    STEP: Saw pod success
    Sep 12 21:19:50.970: INFO: Pod "pod-projected-secrets-4559444b-e527-44a6-8ff7-8170bb5e39ae" satisfied condition "Succeeded or Failed"
    Sep 12 21:19:50.974: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-projected-secrets-4559444b-e527-44a6-8ff7-8170bb5e39ae container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 12 21:19:50.994: INFO: Waiting for pod pod-projected-secrets-4559444b-e527-44a6-8ff7-8170bb5e39ae to disappear
    Sep 12 21:19:50.997: INFO: Pod pod-projected-secrets-4559444b-e527-44a6-8ff7-8170bb5e39ae no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:19:50.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3765" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":97,"skipped":1554,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:20:03.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-7679" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":98,"skipped":1584,"failed":0}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:20:04.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-110" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":99,"skipped":1601,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] RuntimeClass
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:20:04.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "runtimeclass-3356" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":-1,"completed":100,"skipped":1615,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:20:17.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-6065" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":101,"skipped":1682,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Lease
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:20:17.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "lease-test-2608" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":102,"skipped":1692,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:20:18.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-9985" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":103,"skipped":1763,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:20:24.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-153" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":104,"skipped":1765,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:20:37.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-8805" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":105,"skipped":1797,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PreStop
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:20:46.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "prestop-6637" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":106,"skipped":1808,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
    STEP: creating replication controller affinity-clusterip-transition in namespace services-4074
    I0912 21:19:35.905114      16 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-4074, replica count: 3
    I0912 21:19:38.956776      16 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
    Sep 12 21:19:38.963: INFO: Creating new exec pod
    Sep 12 21:19:41.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
    Sep 12 21:19:44.159: INFO: rc: 1
    Sep 12 21:19:44.159: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-transition 80
    nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
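
Each "Retrying..." that follows is the same exec'd netcat probe re-issued on a roughly three-second cadence until the ClusterIP answers: nc -w 2 bounds each TCP connect attempt at two seconds against the service's in-cluster DNS name on port 80. The by-hand equivalent, with names taken directly from the log above:

kubectl --kubeconfig=/tmp/kubeconfig -n services-4074 exec execpod-affinity8d6qm -- \
  /bin/sh -c 'echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
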
... skipping 544 lines ...
    Sep 12 21:21:27.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
    Sep 12 21:21:29.343: INFO: rc: 1
    Sep 12 21:21:29.343: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-transition 80
    nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 12 21:21:30.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
    Sep 12 21:21:32.342: INFO: rc: 1
    Sep 12 21:21:32.342: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-transition 80
    nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 12 21:21:33.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
    Sep 12 21:21:35.357: INFO: rc: 1
    Sep 12 21:21:35.357: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-transition 80
    nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 12 21:21:36.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
    Sep 12 21:21:38.331: INFO: rc: 1
    Sep 12 21:21:38.331: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-transition 80
    nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 12 21:21:39.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
    Sep 12 21:21:41.359: INFO: rc: 1
    Sep 12 21:21:41.359: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-transition 80
    nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 12 21:21:42.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
    Sep 12 21:21:44.356: INFO: rc: 1
    Sep 12 21:21:44.356: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-transition 80
    nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 12 21:21:44.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
    Sep 12 21:21:46.523: INFO: rc: 1
    Sep 12 21:21:46.523: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4074 exec execpod-affinity8d6qm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip-transition 80
    nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 12 21:21:46.523: FAIL: Unexpected error:

        <*errors.errorString | 0xc0009a8260>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol
    occurred
    
... skipping 27 lines ...
    • Failure [144.696 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 12 21:21:46.523: Unexpected error:

          <*errors.errorString | 0xc0009a8260>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576
    ------------------------------
    {"msg":"FAILED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":56,"skipped":954,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:22:00.543: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 61 lines ...
    STEP: Destroying namespace "services-8460" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":57,"skipped":954,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 43 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:22:36.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-2464" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":107,"skipped":1839,"failed":0}

    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:22:36.887: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:22:43.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2829" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":108,"skipped":1839,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:22:43.039: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-24c0dbf6-186b-4a52-be12-0f0243440d23
    STEP: Creating a pod to test consume secrets
    Sep 12 21:22:43.079: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8f8cbaee-b195-4c03-b449-361275ea7679" in namespace "projected-2166" to be "Succeeded or Failed"
    Sep 12 21:22:43.082: INFO: Pod "pod-projected-secrets-8f8cbaee-b195-4c03-b449-361275ea7679": Phase="Pending", Reason="", readiness=false. Elapsed: 2.707006ms
    Sep 12 21:22:45.087: INFO: Pod "pod-projected-secrets-8f8cbaee-b195-4c03-b449-361275ea7679": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007463315s
    STEP: Saw pod success
    Sep 12 21:22:45.087: INFO: Pod "pod-projected-secrets-8f8cbaee-b195-4c03-b449-361275ea7679" satisfied condition "Succeeded or Failed"
    Sep 12 21:22:45.090: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-projected-secrets-8f8cbaee-b195-4c03-b449-361275ea7679 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 12 21:22:45.121: INFO: Waiting for pod pod-projected-secrets-8f8cbaee-b195-4c03-b449-361275ea7679 to disappear
    Sep 12 21:22:45.125: INFO: Pod pod-projected-secrets-8f8cbaee-b195-4c03-b449-361275ea7679 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:22:45.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2166" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":109,"skipped":1840,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 21:22:45.197: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7bec32cc-b5c7-4ecb-a7d3-b276547d97de" in namespace "downward-api-5752" to be "Succeeded or Failed"
    Sep 12 21:22:45.203: INFO: Pod "downwardapi-volume-7bec32cc-b5c7-4ecb-a7d3-b276547d97de": Phase="Pending", Reason="", readiness=false. Elapsed: 5.383552ms
    Sep 12 21:22:47.208: INFO: Pod "downwardapi-volume-7bec32cc-b5c7-4ecb-a7d3-b276547d97de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010107733s
    STEP: Saw pod success
    Sep 12 21:22:47.208: INFO: Pod "downwardapi-volume-7bec32cc-b5c7-4ecb-a7d3-b276547d97de" satisfied condition "Succeeded or Failed"
    Sep 12 21:22:47.211: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod downwardapi-volume-7bec32cc-b5c7-4ecb-a7d3-b276547d97de container client-container: <nil>
    STEP: delete the pod
    Sep 12 21:22:47.227: INFO: Waiting for pod downwardapi-volume-7bec32cc-b5c7-4ecb-a7d3-b276547d97de to disappear
    Sep 12 21:22:47.230: INFO: Pod downwardapi-volume-7bec32cc-b5c7-4ecb-a7d3-b276547d97de no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 33 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:22:50.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-8050" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":58,"skipped":979,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:22:50.825: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-4a56b4bf-5caa-4fcf-9fcd-4772f1c847fe
    STEP: Creating a pod to test consume secrets
    Sep 12 21:22:50.865: INFO: Waiting up to 5m0s for pod "pod-secrets-fc1ab1e9-b7f7-4919-b7df-eb24bd35d500" in namespace "secrets-8310" to be "Succeeded or Failed"
    Sep 12 21:22:50.869: INFO: Pod "pod-secrets-fc1ab1e9-b7f7-4919-b7df-eb24bd35d500": Phase="Pending", Reason="", readiness=false. Elapsed: 3.934533ms
    Sep 12 21:22:52.874: INFO: Pod "pod-secrets-fc1ab1e9-b7f7-4919-b7df-eb24bd35d500": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008887372s
    STEP: Saw pod success
    Sep 12 21:22:52.874: INFO: Pod "pod-secrets-fc1ab1e9-b7f7-4919-b7df-eb24bd35d500" satisfied condition "Succeeded or Failed"
    Sep 12 21:22:52.877: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-secrets-fc1ab1e9-b7f7-4919-b7df-eb24bd35d500 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 12 21:22:52.894: INFO: Waiting for pod pod-secrets-fc1ab1e9-b7f7-4919-b7df-eb24bd35d500 to disappear
    Sep 12 21:22:52.896: INFO: Pod pod-secrets-fc1ab1e9-b7f7-4919-b7df-eb24bd35d500 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:22:52.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-8310" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":1026,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:22:53.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7909" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":60,"skipped":1027,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir wrapper volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:22:55.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-wrapper-4223" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":61,"skipped":1044,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    STEP: Destroying namespace "services-3822" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":62,"skipped":1064,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":110,"skipped":1853,"failed":0}

    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:22:47.246: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pods
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:22:55.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-8133" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":111,"skipped":1853,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:22:55.332: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep 12 21:22:55.362: INFO: PodSpec: initContainers in spec.initContainers
    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:22:57.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-2306" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":63,"skipped":1083,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:22:59.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-3394" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":112,"skipped":1915,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 21:22:58.072: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2a2e5008-ffce-4952-ad9f-966095f08d82" in namespace "downward-api-1300" to be "Succeeded or Failed"
    Sep 12 21:22:58.076: INFO: Pod "downwardapi-volume-2a2e5008-ffce-4952-ad9f-966095f08d82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121287ms
    Sep 12 21:23:00.080: INFO: Pod "downwardapi-volume-2a2e5008-ffce-4952-ad9f-966095f08d82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008276417s
    STEP: Saw pod success
    Sep 12 21:23:00.080: INFO: Pod "downwardapi-volume-2a2e5008-ffce-4952-ad9f-966095f08d82" satisfied condition "Succeeded or Failed"
    Sep 12 21:23:00.083: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod downwardapi-volume-2a2e5008-ffce-4952-ad9f-966095f08d82 container client-container: <nil>
    STEP: delete the pod
    Sep 12 21:23:00.099: INFO: Waiting for pod downwardapi-volume-2a2e5008-ffce-4952-ad9f-966095f08d82 to disappear
    Sep 12 21:23:00.101: INFO: Pod downwardapi-volume-2a2e5008-ffce-4952-ad9f-966095f08d82 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:23:00.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-1300" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1127,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:22:59.550: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-runtime
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: create the container
    STEP: wait for the container to reach Failed
    STEP: get the container status
    STEP: the container should be terminated
    STEP: the termination message should be set
    Sep 12 21:23:01.599: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
    STEP: delete the container
    [AfterEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:23:01.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-3860" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":113,"skipped":1920,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 68 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:23:21.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-3389" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":114,"skipped":1928,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:23:21.766: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-657e755d-2fdb-4483-a421-f87e1b43aad8
    STEP: Creating a pod to test consume configMaps
    Sep 12 21:23:21.814: INFO: Waiting up to 5m0s for pod "pod-configmaps-642fd751-94e7-4c10-9db7-ce1c84f2f38f" in namespace "configmap-6380" to be "Succeeded or Failed"
    Sep 12 21:23:21.822: INFO: Pod "pod-configmaps-642fd751-94e7-4c10-9db7-ce1c84f2f38f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.876765ms
    Sep 12 21:23:23.826: INFO: Pod "pod-configmaps-642fd751-94e7-4c10-9db7-ce1c84f2f38f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012430948s
    STEP: Saw pod success
    Sep 12 21:23:23.826: INFO: Pod "pod-configmaps-642fd751-94e7-4c10-9db7-ce1c84f2f38f" satisfied condition "Succeeded or Failed"
    Sep 12 21:23:23.829: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-configmaps-642fd751-94e7-4c10-9db7-ce1c84f2f38f container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 21:23:23.856: INFO: Waiting for pod pod-configmaps-642fd751-94e7-4c10-9db7-ce1c84f2f38f to disappear
    Sep 12 21:23:23.859: INFO: Pod pod-configmaps-642fd751-94e7-4c10-9db7-ce1c84f2f38f no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:23:23.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-6380" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":115,"skipped":1950,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:23:24.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7883" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":116,"skipped":1966,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:23:24.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-1581" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":117,"skipped":2011,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 21:23:24.218: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed3703a0-4924-4cdd-8b23-72f12663284d" in namespace "projected-5049" to be "Succeeded or Failed"
    Sep 12 21:23:24.221: INFO: Pod "downwardapi-volume-ed3703a0-4924-4cdd-8b23-72f12663284d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.368665ms
    Sep 12 21:23:26.226: INFO: Pod "downwardapi-volume-ed3703a0-4924-4cdd-8b23-72f12663284d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007876103s
    STEP: Saw pod success
    Sep 12 21:23:26.226: INFO: Pod "downwardapi-volume-ed3703a0-4924-4cdd-8b23-72f12663284d" satisfied condition "Succeeded or Failed"
    Sep 12 21:23:26.229: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod downwardapi-volume-ed3703a0-4924-4cdd-8b23-72f12663284d container client-container: <nil>
    STEP: delete the pod
    Sep 12 21:23:26.248: INFO: Waiting for pod downwardapi-volume-ed3703a0-4924-4cdd-8b23-72f12663284d to disappear
    Sep 12 21:23:26.251: INFO: Pod downwardapi-volume-ed3703a0-4924-4cdd-8b23-72f12663284d no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:23:26.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5049" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":118,"skipped":2023,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:23:26.277: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should mount projected service account token [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test service account token: 
    Sep 12 21:23:26.321: INFO: Waiting up to 5m0s for pod "test-pod-5c44dd42-b14b-4119-ab6d-dbea84361240" in namespace "svcaccounts-214" to be "Succeeded or Failed"
    Sep 12 21:23:26.324: INFO: Pod "test-pod-5c44dd42-b14b-4119-ab6d-dbea84361240": Phase="Pending", Reason="", readiness=false. Elapsed: 3.03477ms
    Sep 12 21:23:28.328: INFO: Pod "test-pod-5c44dd42-b14b-4119-ab6d-dbea84361240": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006779236s
    STEP: Saw pod success
    Sep 12 21:23:28.328: INFO: Pod "test-pod-5c44dd42-b14b-4119-ab6d-dbea84361240" satisfied condition "Succeeded or Failed"
    Sep 12 21:23:28.331: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod test-pod-5c44dd42-b14b-4119-ab6d-dbea84361240 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 21:23:28.347: INFO: Waiting for pod test-pod-5c44dd42-b14b-4119-ab6d-dbea84361240 to disappear
    Sep 12 21:23:28.350: INFO: Pod test-pod-5c44dd42-b14b-4119-ab6d-dbea84361240 no longer exists
    [AfterEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:23:28.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-214" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":119,"skipped":2030,"failed":0}

    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:23:28.361: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename watch
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:23:28.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-2474" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":120,"skipped":2030,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":65,"skipped":1155,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:23:03.763: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
    Sep 12 21:23:05.854: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7031.svc.cluster.local from pod dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5: the server could not find the requested resource (get pods dns-test-15a02c16-11f1-4405-ab5d-785c66020df5)
    Sep 12 21:23:05.858: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7031.svc.cluster.local from pod dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5: the server could not find the requested resource (get pods dns-test-15a02c16-11f1-4405-ab5d-785c66020df5)
    Sep 12 21:23:05.872: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7031.svc.cluster.local from pod dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5: the server could not find the requested resource (get pods dns-test-15a02c16-11f1-4405-ab5d-785c66020df5)
    Sep 12 21:23:05.876: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7031.svc.cluster.local from pod dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5: the server could not find the requested resource (get pods dns-test-15a02c16-11f1-4405-ab5d-785c66020df5)
    Sep 12 21:23:05.881: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7031.svc.cluster.local from pod dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5: the server could not find the requested resource (get pods dns-test-15a02c16-11f1-4405-ab5d-785c66020df5)
    Sep 12 21:23:05.886: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7031.svc.cluster.local from pod dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5: the server could not find the requested resource (get pods dns-test-15a02c16-11f1-4405-ab5d-785c66020df5)
    Sep 12 21:23:05.895: INFO: Lookups using dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7031.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7031.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7031.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7031.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7031.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7031.svc.cluster.local jessie_udp@dns-test-service-2.dns-7031.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7031.svc.cluster.local]

    
    Sep 12 21:23:10.900: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7031.svc.cluster.local from pod dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5: the server could not find the requested resource (get pods dns-test-15a02c16-11f1-4405-ab5d-785c66020df5)
    Sep 12 21:23:10.903: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7031.svc.cluster.local from pod dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5: the server could not find the requested resource (get pods dns-test-15a02c16-11f1-4405-ab5d-785c66020df5)
    Sep 12 21:23:10.906: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7031.svc.cluster.local from pod dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5: the server could not find the requested resource (get pods dns-test-15a02c16-11f1-4405-ab5d-785c66020df5)
    Sep 12 21:23:10.909: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7031.svc.cluster.local from pod dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5: the server could not find the requested resource (get pods dns-test-15a02c16-11f1-4405-ab5d-785c66020df5)
    Sep 12 21:23:10.920: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7031.svc.cluster.local from pod dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5: the server could not find the requested resource (get pods dns-test-15a02c16-11f1-4405-ab5d-785c66020df5)
    Sep 12 21:23:10.924: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7031.svc.cluster.local from pod dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5: the server could not find the requested resource (get pods dns-test-15a02c16-11f1-4405-ab5d-785c66020df5)
    Sep 12 21:23:10.927: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7031.svc.cluster.local from pod dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5: the server could not find the requested resource (get pods dns-test-15a02c16-11f1-4405-ab5d-785c66020df5)
    Sep 12 21:23:10.931: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7031.svc.cluster.local from pod dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5: the server could not find the requested resource (get pods dns-test-15a02c16-11f1-4405-ab5d-785c66020df5)
    Sep 12 21:23:10.936: INFO: Lookups using dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7031.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7031.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7031.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7031.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7031.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7031.svc.cluster.local jessie_udp@dns-test-service-2.dns-7031.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7031.svc.cluster.local]
    
    ... skipping 4 identical retry rounds (21:23:15, 21:23:20, 21:23:25, 21:23:30): the same eight lookups failed each time with "the server could not find the requested resource" ...
    
    Sep 12 21:23:35.934: INFO: DNS probes using dns-7031/dns-test-15a02c16-11f1-4405-ab5d-785c66020df5 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:23:35.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-7031" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":66,"skipped":1155,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSS
    ------------------------------
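    Note: the wheezy/jessie names above come from two query pods that resolve each record over UDP and TCP and write the answers into a results volume, in the same style as the dig loop shown verbatim later in this log. A minimal sketch of one such UDP/TCP probe round, reusing the names from this run (dig being present in the pod image is an assumption):
    
        for name in dns-querier-2.dns-test-service-2.dns-7031.svc.cluster.local \
                    dns-test-service-2.dns-7031.svc.cluster.local; do
          dig +short +notcp "${name}" A > "/results/wheezy_udp@${name}"   # UDP lookup
          dig +short +tcp   "${name}" A > "/results/wheezy_tcp@${name}"   # TCP lookup
          sleep 1
        done
    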
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:23:38.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-1401" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":121,"skipped":2046,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:23:40.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-3550" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":67,"skipped":1160,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:23:40.106: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-8df19bb7-0569-4b02-bb5d-1e6b96c0a0fa
    STEP: Creating a pod to test consume secrets
    Sep 12 21:23:40.149: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-64794079-94e2-447e-8b6e-b9dae12debaa" in namespace "projected-6219" to be "Succeeded or Failed"
    Sep 12 21:23:40.152: INFO: Pod "pod-projected-secrets-64794079-94e2-447e-8b6e-b9dae12debaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.946619ms
    Sep 12 21:23:42.157: INFO: Pod "pod-projected-secrets-64794079-94e2-447e-8b6e-b9dae12debaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007512175s
    STEP: Saw pod success
    Sep 12 21:23:42.157: INFO: Pod "pod-projected-secrets-64794079-94e2-447e-8b6e-b9dae12debaa" satisfied condition "Succeeded or Failed"
    Sep 12 21:23:42.159: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod pod-projected-secrets-64794079-94e2-447e-8b6e-b9dae12debaa container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 12 21:23:42.180: INFO: Waiting for pod pod-projected-secrets-64794079-94e2-447e-8b6e-b9dae12debaa to disappear
    Sep 12 21:23:42.183: INFO: Pod pod-projected-secrets-64794079-94e2-447e-8b6e-b9dae12debaa no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:23:42.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6219" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":68,"skipped":1167,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
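    Note: the "Succeeded or Failed" polling above is the framework's pod-phase wait; a rough standalone equivalent with kubectl, using the pod and namespace names from this run (a kubectl new enough to support --for=jsonpath is assumed):
    
        kubectl --kubeconfig=/tmp/kubeconfig -n projected-6219 \
          wait pod/pod-projected-secrets-64794079-94e2-447e-8b6e-b9dae12debaa \
          --for=jsonpath='{.status.phase}'=Succeeded --timeout=5m
    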
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:23:43.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-1294" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":69,"skipped":1191,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:23:50.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-1143" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":70,"skipped":1192,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
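    Note: the elided "Kubectl expose" spec above drives kubectl expose against a replication controller; the command shape is roughly the following (the rc name, service name, and ports are illustrative, since the setup lines are skipped above):
    
        kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-1143 \
          expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379
        kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-1143 get service rm2 -o wide
    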
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
    STEP: Destroying namespace "services-6909" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":71,"skipped":1208,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
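    Note: the cross-namespace service listing exercised by the spec above corresponds to a single list call against the API, e.g.:
    
        kubectl --kubeconfig=/tmp/kubeconfig get services --all-namespaces
    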
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":80,"skipped":1560,"failed":5,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:18:26.058: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 40 lines ...
    Sep 12 21:18:53.478: INFO: Waiting for responses: map[netserver-3:{}]
    Sep 12 21:18:55.479: INFO: 
    Output of kubectl describe pod pod-network-test-3656/netserver-0:
    
    Sep 12 21:18:55.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-3656 describe pod netserver-0 --namespace=pod-network-test-3656'
    Sep 12 21:18:55.601: INFO: stderr: ""
    Sep 12 21:18:55.601: INFO: stdout: "Name:         netserver-0\nNamespace:    pod-network-test-3656\nPriority:     0\nNode:         k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x/172.18.0.4\nStart Time:   Mon, 12 Sep 2022 21:18:26 +0000\nLabels:       selector-91526dba-45f4-4306-b174-42844065095d=true\nAnnotations:  <none>\nStatus:       Running\nIP:           192.168.0.104\nIPs:\n  IP:  192.168.0.104\nContainers:\n  webserver:\n    Container ID:  containerd://a3ff4baf1129b638a9bac7e328db891cd2ae774700ff1f9da381b416188c1837\n    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:      k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Ports:         8080/TCP, 8081/UDP\n    Host Ports:    0/TCP, 0/UDP\n    Args:\n      netexec\n      --http-port=8080\n      --udp-port=8081\n    State:          Running\n      Started:      Mon, 12 Sep 2022 21:18:28 +0000\n    Ready:          True\n    Restart Count:  0\n    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8rlms (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-8rlms:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              kubernetes.io/hostname=k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type     Reason       Age   From               Message\n  ----     ------       ----  ----               -------\n  Normal   Scheduled    29s   default-scheduler  Successfully assigned pod-network-test-3656/netserver-0 to k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x\n  Warning  FailedMount  28s   kubelet            MountVolume.SetUp failed for volume \"kube-api-access-8rlms\" : failed to sync configmap cache: timed out waiting for the condition\n  Normal   Pulled       27s   kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal   Created      27s   kubelet            Created container webserver\n  Normal   Started      27s   kubelet            Started container webserver\n"
    Sep 12 21:18:55.601: INFO: Name:         netserver-0
    Namespace:    pod-network-test-3656
    Priority:     0
    Node:         k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x/172.18.0.4
    Start Time:   Mon, 12 Sep 2022 21:18:26 +0000
    Labels:       selector-91526dba-45f4-4306-b174-42844065095d=true
... skipping 40 lines ...
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason       Age   From               Message
      ----     ------       ----  ----               -------
      Normal   Scheduled    29s   default-scheduler  Successfully assigned pod-network-test-3656/netserver-0 to k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x
      Warning  FailedMount  28s   kubelet            MountVolume.SetUp failed for volume "kube-api-access-8rlms" : failed to sync configmap cache: timed out waiting for the condition
      Normal   Pulled       27s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal   Created      27s   kubelet            Created container webserver
      Normal   Started      27s   kubelet            Started container webserver
    
    Sep 12 21:18:55.601: INFO: 
    Output of kubectl describe pod pod-network-test-3656/netserver-1:
... skipping 178 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  29s   default-scheduler  Successfully assigned pod-network-test-3656/netserver-3 to k8s-upgrade-and-conformance-6izh7i-worker-mgm4ov
      Normal  Pulled     29s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    29s   kubelet            Created container webserver
      Normal  Started    29s   kubelet            Started container webserver
    
    Sep 12 21:18:55.957: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.1.86:9080/dial?request=hostname&protocol=http&host=192.168.2.48&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}])
    Sep 12 21:18:55.957: INFO: ...failed...will try again in next pass
    Sep 12 21:18:55.957: INFO: Going to retry 1 out of 4 pods....
    Sep 12 21:18:55.957: INFO: Double-checking 1 pod on host 172.18.0.7 that wasn't seen the first time.
    Sep 12 21:18:55.957: INFO: Now attempting to probe pod [[[ 192.168.2.48 ]]]
    Sep 12 21:18:55.961: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.86:9080/dial?request=hostname&protocol=http&host=192.168.2.48&port=8080&tries=1'] Namespace:pod-network-test-3656 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 12 21:18:55.961: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 12 21:19:01.043: INFO: Waiting for responses: map[netserver-3:{}]
... skipping 134 lines ...
    Sep 12 21:24:20.168: INFO: Waiting for responses: map[netserver-3:{}]
    Sep 12 21:24:22.170: INFO: 
    Output of kubectl describe pod pod-network-test-3656/netserver-0:
    
    Sep 12 21:24:22.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-3656 describe pod netserver-0 --namespace=pod-network-test-3656'
    Sep 12 21:24:22.283: INFO: stderr: ""
    Sep 12 21:24:22.283: INFO: stdout: "Name:         netserver-0\nNamespace:    pod-network-test-3656\nPriority:     0\nNode:         k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x/172.18.0.4\nStart Time:   Mon, 12 Sep 2022 21:18:26 +0000\nLabels:       selector-91526dba-45f4-4306-b174-42844065095d=true\nAnnotations:  <none>\nStatus:       Running\nIP:           192.168.0.104\nIPs:\n  IP:  192.168.0.104\nContainers:\n  webserver:\n    Container ID:  containerd://a3ff4baf1129b638a9bac7e328db891cd2ae774700ff1f9da381b416188c1837\n    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:      k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Ports:         8080/TCP, 8081/UDP\n    Host Ports:    0/TCP, 0/UDP\n    Args:\n      netexec\n      --http-port=8080\n      --udp-port=8081\n    State:          Running\n      Started:      Mon, 12 Sep 2022 21:18:28 +0000\n    Ready:          True\n    Restart Count:  0\n    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8rlms (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-8rlms:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              kubernetes.io/hostname=k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type     Reason       Age    From               Message\n  ----     ------       ----   ----               -------\n  Normal   Scheduled    5m56s  default-scheduler  Successfully assigned pod-network-test-3656/netserver-0 to k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x\n  Warning  FailedMount  5m55s  kubelet            MountVolume.SetUp failed for volume \"kube-api-access-8rlms\" : failed to sync configmap cache: timed out waiting for the condition\n  Normal   Pulled       5m54s  kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal   Created      5m54s  kubelet            Created container webserver\n  Normal   Started      5m54s  kubelet            Started container webserver\n"
    Sep 12 21:24:22.283: INFO: Name:         netserver-0
    Namespace:    pod-network-test-3656
    Priority:     0
    Node:         k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x/172.18.0.4
    Start Time:   Mon, 12 Sep 2022 21:18:26 +0000
    Labels:       selector-91526dba-45f4-4306-b174-42844065095d=true
... skipping 40 lines ...
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason       Age    From               Message
      ----     ------       ----   ----               -------
      Normal   Scheduled    5m56s  default-scheduler  Successfully assigned pod-network-test-3656/netserver-0 to k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x
      Warning  FailedMount  5m55s  kubelet            MountVolume.SetUp failed for volume "kube-api-access-8rlms" : failed to sync configmap cache: timed out waiting for the condition
      Normal   Pulled       5m54s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal   Created      5m54s  kubelet            Created container webserver
      Normal   Started      5m54s  kubelet            Started container webserver
    
    Sep 12 21:24:22.283: INFO: 
    Output of kubectl describe pod pod-network-test-3656/netserver-1:
... skipping 178 lines ...
      ----    ------     ----   ----               -------
      Normal  Scheduled  5m56s  default-scheduler  Successfully assigned pod-network-test-3656/netserver-3 to k8s-upgrade-and-conformance-6izh7i-worker-mgm4ov
      Normal  Pulled     5m56s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    5m56s  kubelet            Created container webserver
      Normal  Started    5m56s  kubelet            Started container webserver
    
    Sep 12 21:24:22.630: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.1.86:9080/dial?request=hostname&protocol=http&host=192.168.2.48&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}])
    Sep 12 21:24:22.630: INFO: ... Done probing pod [[[ 192.168.2.48 ]]]
    Sep 12 21:24:22.630: INFO: succeeded at polling 3 out of 4 connections
    Sep 12 21:24:22.630: INFO: pod polling failure summary:
    Sep 12 21:24:22.630: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.1.86:9080/dial?request=hostname&protocol=http&host=192.168.2.48&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}]
    Sep 12 21:24:22.630: FAIL: failed,  1 out of 4 connections failed
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a70480)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 12 21:24:22.630: failed,  1 out of 4 connections failed
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":80,"skipped":1560,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSS
    ------------------------------
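    Note: the failed connectivity check above is an HTTP hop through agnhost's netexec /dial endpoint, and can be replayed by exec'ing the same curl the framework ran (command, pod, and addresses copied from this run):
    
        kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-3656 \
          exec test-container-pod -c webserver -- \
          curl -g -q -s 'http://192.168.1.86:9080/dial?request=hostname&protocol=http&host=192.168.2.48&port=8080&tries=1'
    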
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:24:22.653: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-0bbddfaa-8e82-4715-8d4d-f20fe972a633
    STEP: Creating a pod to test consume secrets
    Sep 12 21:24:22.697: INFO: Waiting up to 5m0s for pod "pod-secrets-8039600f-80ef-49ed-a6a7-a95cee028e20" in namespace "secrets-638" to be "Succeeded or Failed"
    Sep 12 21:24:22.701: INFO: Pod "pod-secrets-8039600f-80ef-49ed-a6a7-a95cee028e20": Phase="Pending", Reason="", readiness=false. Elapsed: 3.704568ms
    Sep 12 21:24:24.704: INFO: Pod "pod-secrets-8039600f-80ef-49ed-a6a7-a95cee028e20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007110129s
    STEP: Saw pod success
    Sep 12 21:24:24.704: INFO: Pod "pod-secrets-8039600f-80ef-49ed-a6a7-a95cee028e20" satisfied condition "Succeeded or Failed"
    Sep 12 21:24:24.707: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-secrets-8039600f-80ef-49ed-a6a7-a95cee028e20 container secret-env-test: <nil>
    STEP: delete the pod
    Sep 12 21:24:24.718: INFO: Waiting for pod pod-secrets-8039600f-80ef-49ed-a6a7-a95cee028e20 to disappear
    Sep 12 21:24:24.722: INFO: Pod pod-secrets-8039600f-80ef-49ed-a6a7-a95cee028e20 no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:24:24.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-638" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":81,"skipped":1565,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSS
    ------------------------------
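    Note: the secret-in-env-vars spec above boils down to creating a secret and mapping one key into a container's environment; creating and reading back such a secret by hand looks like this (the key/value literals are placeholders, not taken from this run):
    
        kubectl --kubeconfig=/tmp/kubeconfig -n secrets-638 \
          create secret generic secret-test --from-literal=data-1=value-1
        kubectl --kubeconfig=/tmp/kubeconfig -n secrets-638 \
          get secret secret-test -o jsonpath='{.data.data-1}' | base64 -d
    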
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:24:30.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3441" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":82,"skipped":1579,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:24:32.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-9966" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":83,"skipped":1590,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.640 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":72,"skipped":1224,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSS
    ------------------------------
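    Note: the liveness spec above passes when the container is *not* restarted during the watch window; the same signal is readable straight off the pod status (pod and namespace names are illustrative, since the spec's setup lines are skipped above):
    
        kubectl --kubeconfig=/tmp/kubeconfig -n container-probe-1234 \
          get pod busybox -o jsonpath='{.status.containerStatuses[0].restartCount}'
    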
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 52 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:28:07.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7942" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":73,"skipped":1227,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
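    Note: the retrieve-and-filter spec above exercises the standard kubectl logs selectors, e.g. (the pod name is illustrative, since the setup lines are skipped above):
    
        kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-7942 logs logs-generator --tail=25
        kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-7942 logs logs-generator --since=1s
        kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-7942 logs logs-generator --timestamps=true
    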
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep 12 21:28:11.596: INFO: File wheezy_udp@dns-test-service-3.dns-79.svc.cluster.local from pod  dns-79/dns-test-2e54b065-1b80-4390-94f4-06f4da772513 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 12 21:28:11.600: INFO: File jessie_udp@dns-test-service-3.dns-79.svc.cluster.local from pod  dns-79/dns-test-2e54b065-1b80-4390-94f4-06f4da772513 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 12 21:28:11.600: INFO: Lookups using dns-79/dns-test-2e54b065-1b80-4390-94f4-06f4da772513 failed for: [wheezy_udp@dns-test-service-3.dns-79.svc.cluster.local jessie_udp@dns-test-service-3.dns-79.svc.cluster.local]
    
    ... skipping 5 identical retry rounds (21:28:16, 21:28:21, 21:28:26, 21:28:31, 21:28:36): both lookups kept returning 'foo.example.com.' instead of 'bar.example.com.' ...
    
    Sep 12 21:28:41.610: INFO: DNS probes using dns-test-2e54b065-1b80-4390-94f4-06f4da772513 succeeded
    
    STEP: deleting the pod
    STEP: changing the service to type=ClusterIP
    STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-79.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-79.svc.cluster.local; sleep 1; done
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:28:47.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-79" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":74,"skipped":1245,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
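    
    Note: the ExternalName flip above (foo.example.com. to bar.example.com.) is verified with in-pod dig loops like the one shown verbatim at the type=ClusterIP step; a single spot-check from any pod with dig would be:
    
        dig +short dns-test-service-3.dns-79.svc.cluster.local CNAME   # expect bar.example.com.
    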
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:28:47.733: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-32f48c8b-1519-4f9e-9472-0ebe6e39ea72
    STEP: Creating a pod to test consume configMaps
    Sep 12 21:28:47.911: INFO: Waiting up to 5m0s for pod "pod-configmaps-74556573-4076-4492-be84-6953f41b1ff5" in namespace "configmap-5365" to be "Succeeded or Failed"
    Sep 12 21:28:47.969: INFO: Pod "pod-configmaps-74556573-4076-4492-be84-6953f41b1ff5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.385018ms
    Sep 12 21:28:49.973: INFO: Pod "pod-configmaps-74556573-4076-4492-be84-6953f41b1ff5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062573165s
    STEP: Saw pod success
    Sep 12 21:28:49.973: INFO: Pod "pod-configmaps-74556573-4076-4492-be84-6953f41b1ff5" satisfied condition "Succeeded or Failed"
    Sep 12 21:28:49.976: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-m8lgv pod pod-configmaps-74556573-4076-4492-be84-6953f41b1ff5 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 12 21:28:49.989: INFO: Waiting for pod pod-configmaps-74556573-4076-4492-be84-6953f41b1ff5 to disappear
    Sep 12 21:28:49.992: INFO: Pod pod-configmaps-74556573-4076-4492-be84-6953f41b1ff5 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:28:49.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5365" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":75,"skipped":1245,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:28:54.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1086" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":76,"skipped":1261,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    • [SLOW TEST:322.083 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":122,"skipped":2066,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:29:10.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-2921" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":123,"skipped":2070,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:29:35.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-6162" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":77,"skipped":1290,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:29:46.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-4538" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":78,"skipped":1317,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:29:46.723: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test env composition
    Sep 12 21:29:46.763: INFO: Waiting up to 5m0s for pod "var-expansion-62ef2353-cfa4-45e5-94cf-468c65ee59ab" in namespace "var-expansion-3402" to be "Succeeded or Failed"
    Sep 12 21:29:46.766: INFO: Pod "var-expansion-62ef2353-cfa4-45e5-94cf-468c65ee59ab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.041751ms
    Sep 12 21:29:48.771: INFO: Pod "var-expansion-62ef2353-cfa4-45e5-94cf-468c65ee59ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007452735s
    STEP: Saw pod success
    Sep 12 21:29:48.771: INFO: Pod "var-expansion-62ef2353-cfa4-45e5-94cf-468c65ee59ab" satisfied condition "Succeeded or Failed"
    Sep 12 21:29:48.774: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod var-expansion-62ef2353-cfa4-45e5-94cf-468c65ee59ab container dapi-container: <nil>
    STEP: delete the pod
    Sep 12 21:29:48.806: INFO: Waiting for pod var-expansion-62ef2353-cfa4-45e5-94cf-468c65ee59ab to disappear
    Sep 12 21:29:48.810: INFO: Pod var-expansion-62ef2353-cfa4-45e5-94cf-468c65ee59ab no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:29:48.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-3402" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":79,"skipped":1329,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "crd-webhook-2032" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":80,"skipped":1331,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 61 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:03.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3279" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":81,"skipped":1339,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:04.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-9688" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":82,"skipped":1353,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:20.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-8409" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":83,"skipped":1374,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    Sep 12 21:30:21.846: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
    Sep 12 21:30:21.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4488 describe pod agnhost-primary-x4854'
    Sep 12 21:30:21.957: INFO: stderr: ""
    Sep 12 21:30:21.957: INFO: stdout: "Name:         agnhost-primary-x4854\nNamespace:    kubectl-4488\nPriority:     0\nNode:         k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x/172.18.0.4\nStart Time:   Mon, 12 Sep 2022 21:30:20 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           192.168.0.128\nIPs:\n  IP:           192.168.0.128\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://58c2ebc71c1738427f8b6f88ce44ffdacf1cc16c48395a763abb323f0b24450e\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 12 Sep 2022 21:30:21 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j4jnz (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-j4jnz:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  1s    default-scheduler  Successfully assigned kubectl-4488/agnhost-primary-x4854 to k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x\n  Normal  Pulled     0s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    0s    kubelet            Created container agnhost-primary\n  Normal  Started    0s    kubelet            Started container agnhost-primary\n"
    Sep 12 21:30:21.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4488 describe rc agnhost-primary'
    Sep 12 21:30:22.092: INFO: stderr: ""
    Sep 12 21:30:22.092: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-4488\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  2s    replication-controller  Created pod: agnhost-primary-x4854\n"
    Sep 12 21:30:22.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4488 describe service agnhost-primary'
    Sep 12 21:30:22.196: INFO: stderr: ""
    Sep 12 21:30:22.196: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-4488\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                10.140.249.228\nIPs:               10.140.249.228\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.0.128:6379\nSession Affinity:  None\nEvents:            <none>\n"
    Sep 12 21:30:22.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4488 describe node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x'
    Sep 12 21:30:22.326: INFO: stderr: ""
    Sep 12 21:30:22.326: INFO: stdout: "Name:               k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x\n                    kubernetes.io/os=linux\nAnnotations:        cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-6izh7i\n                    cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-n56wbd\n                    cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x\n                    cluster.x-k8s.io/owner-kind: MachineSet\n                    cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d\n                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Mon, 12 Sep 2022 20:42:33 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x\n  AcquireTime:     <unset>\n  RenewTime:       Mon, 12 Sep 2022 21:30:21 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Mon, 12 Sep 2022 21:28:54 +0000   Mon, 12 Sep 2022 20:42:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Mon, 12 Sep 2022 21:28:54 +0000   Mon, 12 Sep 2022 20:42:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Mon, 12 Sep 2022 21:28:54 +0000   Mon, 12 Sep 2022 20:42:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Mon, 12 Sep 2022 21:28:54 +0000   Mon, 12 Sep 2022 20:42:53 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.4\n  Hostname:    k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860676Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 c541c797a6844b8f832b5f9af809dfd6\n  System UUID:                4dc5a39c-42b0-4292-a6c5-5f93e9cbbb80\n  Boot ID:                    97191b84-aaae-49cf-bfab-7d2bac53b2d9\n  Kernel Version:             5.4.0-1076-gke\n  OS Image:                   Ubuntu 22.04.1 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.6.7\n  Kubelet Version:            v1.21.14\n  Kube-Proxy Version:         v1.21.14\nPodCIDR:                      192.168.0.0/24\nPodCIDRs:                     192.168.0.0/24\nProviderID:                   docker:////k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x\nNon-terminated 
Pods:          (5 in total)\n  Namespace                   Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                  ------------  ----------  ---------------  -------------  ---\n  kube-system                 kindnet-tr4h2                         100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      47m\n  kube-system                 kube-proxy-hmpd2                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         47m\n  kubectl-4488                agnhost-primary-x4854                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s\n  services-5303               affinity-nodeport-transition-gfj9d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         44m\n  services-5303               execpod-affinity5twzk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         44m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests   Limits\n  --------           --------   ------\n  cpu                100m (1%)  100m (1%)\n  memory             50Mi (0%)  50Mi (0%)\n  ephemeral-storage  0 (0%)     0 (0%)\n  hugepages-1Gi      0 (0%)     0 (0%)\n  hugepages-2Mi      0 (0%)     0 (0%)\nEvents:\n  Type    Reason    Age   From        Message\n  ----    ------    ----  ----        -------\n  Normal  Starting  47m   kube-proxy  Starting kube-proxy.\n"
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:22.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4488" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":84,"skipped":1400,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "webhook-4612-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":85,"skipped":1403,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:30:26.286: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep 12 21:30:26.343: INFO: Waiting up to 5m0s for pod "pod-03a84bae-fd48-4bb4-a08f-4608db3d83c6" in namespace "emptydir-2737" to be "Succeeded or Failed"
    Sep 12 21:30:26.346: INFO: Pod "pod-03a84bae-fd48-4bb4-a08f-4608db3d83c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.952117ms
    Sep 12 21:30:28.351: INFO: Pod "pod-03a84bae-fd48-4bb4-a08f-4608db3d83c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007000733s
    STEP: Saw pod success
    Sep 12 21:30:28.351: INFO: Pod "pod-03a84bae-fd48-4bb4-a08f-4608db3d83c6" satisfied condition "Succeeded or Failed"
    Sep 12 21:30:28.354: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-03a84bae-fd48-4bb4-a08f-4608db3d83c6 container test-container: <nil>
    STEP: delete the pod
    Sep 12 21:30:28.370: INFO: Waiting for pod pod-03a84bae-fd48-4bb4-a08f-4608db3d83c6 to disappear
    Sep 12 21:30:28.372: INFO: Pod pod-03a84bae-fd48-4bb4-a08f-4608db3d83c6 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:28.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-2737" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":86,"skipped":1419,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:31.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-409" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":87,"skipped":1464,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 122 lines ...
    Sep 12 21:25:33.524: INFO: ss-1  k8s-upgrade-and-conformance-6izh7i-worker-938c6l  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 21:24:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 21:25:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-12 21:25:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-12 21:24:53 +0000 UTC  }]
    Sep 12 21:25:33.524: INFO: 
    Sep 12 21:25:33.524: INFO: StatefulSet ss has not reached scale 0, at 1
    STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8809
    Sep 12 21:25:34.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:25:34.645: INFO: rc: 1
    Sep 12 21:25:34.645: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    error: unable to upgrade connection: container not found ("webserver")

    error:
    exit status 1
    Sep 12 21:25:44.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:25:44.745: INFO: rc: 1
    Sep 12 21:25:44.745: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:25:54.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:25:54.843: INFO: rc: 1
    Sep 12 21:25:54.843: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:26:04.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:26:04.940: INFO: rc: 1
    Sep 12 21:26:04.940: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:26:14.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:26:15.032: INFO: rc: 1
    Sep 12 21:26:15.033: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:26:25.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:26:25.133: INFO: rc: 1
    Sep 12 21:26:25.133: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:26:35.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:26:35.229: INFO: rc: 1
    Sep 12 21:26:35.229: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:26:45.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:26:45.325: INFO: rc: 1
    Sep 12 21:26:45.325: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:26:55.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:26:55.449: INFO: rc: 1
    Sep 12 21:26:55.449: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:27:05.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:27:05.542: INFO: rc: 1
    Sep 12 21:27:05.542: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:27:15.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:27:15.640: INFO: rc: 1
    Sep 12 21:27:15.640: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:27:25.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:27:25.734: INFO: rc: 1
    Sep 12 21:27:25.734: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:27:35.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:27:35.826: INFO: rc: 1
    Sep 12 21:27:35.826: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:27:45.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:27:45.925: INFO: rc: 1
    Sep 12 21:27:45.925: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:27:55.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:27:56.018: INFO: rc: 1
    Sep 12 21:27:56.018: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:28:06.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:28:06.107: INFO: rc: 1
    Sep 12 21:28:06.108: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:28:16.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:28:16.478: INFO: rc: 1
    Sep 12 21:28:16.478: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:28:26.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:28:26.572: INFO: rc: 1
    Sep 12 21:28:26.572: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:28:36.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:28:36.667: INFO: rc: 1
    Sep 12 21:28:36.667: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:28:46.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:28:46.789: INFO: rc: 1
    Sep 12 21:28:46.789: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:28:56.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:28:56.880: INFO: rc: 1
    Sep 12 21:28:56.880: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:29:06.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:29:06.973: INFO: rc: 1
    Sep 12 21:29:06.973: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:29:16.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:29:17.075: INFO: rc: 1
    Sep 12 21:29:17.075: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:29:27.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:29:27.206: INFO: rc: 1
    Sep 12 21:29:27.206: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:29:37.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:29:37.301: INFO: rc: 1
    Sep 12 21:29:37.301: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:29:47.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:29:47.433: INFO: rc: 1
    Sep 12 21:29:47.433: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:29:57.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:29:57.566: INFO: rc: 1
    Sep 12 21:29:57.566: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:30:07.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:30:07.667: INFO: rc: 1
    Sep 12 21:30:07.667: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:30:17.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:30:17.767: INFO: rc: 1
    Sep 12 21:30:17.767: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:30:27.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:30:27.861: INFO: rc: 1
    Sep 12 21:30:27.861: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:

    stderr:
    Error from server (NotFound): pods "ss-1" not found

    error:
    exit status 1
    Sep 12 21:30:37.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8809 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 12 21:30:37.955: INFO: rc: 1
    Sep 12 21:30:37.955: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
    Sep 12 21:30:37.956: INFO: Scaling statefulset ss to 0
    Sep 12 21:30:37.974: INFO: Waiting for statefulset status.replicas updated to 0
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
        Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":84,"skipped":1628,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:41.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-8306" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":85,"skipped":1655,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:30:41.140: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-08cf8dcd-7d6e-42d7-a7f6-784af77f051f
    STEP: Creating a pod to test consume secrets
    Sep 12 21:30:41.185: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b508db1c-5cd4-4da8-a022-6349029d7399" in namespace "projected-2169" to be "Succeeded or Failed"
    Sep 12 21:30:41.188: INFO: Pod "pod-projected-secrets-b508db1c-5cd4-4da8-a022-6349029d7399": Phase="Pending", Reason="", readiness=false. Elapsed: 3.55408ms
    Sep 12 21:30:43.192: INFO: Pod "pod-projected-secrets-b508db1c-5cd4-4da8-a022-6349029d7399": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007479302s
    STEP: Saw pod success
    Sep 12 21:30:43.192: INFO: Pod "pod-projected-secrets-b508db1c-5cd4-4da8-a022-6349029d7399" satisfied condition "Succeeded or Failed"
    Sep 12 21:30:43.197: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-projected-secrets-b508db1c-5cd4-4da8-a022-6349029d7399 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 12 21:30:43.224: INFO: Waiting for pod pod-projected-secrets-b508db1c-5cd4-4da8-a022-6349029d7399 to disappear
    Sep 12 21:30:43.227: INFO: Pod pod-projected-secrets-b508db1c-5cd4-4da8-a022-6349029d7399 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:43.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2169" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":86,"skipped":1663,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Discovery
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 89 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:43.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "discovery-3384" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":87,"skipped":1668,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] IngressClass API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:43.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingressclass-3859" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":88,"skipped":1676,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:43.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-5450" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":89,"skipped":1693,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:30:43.965: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 12 21:30:44.011: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-04af4d92-0b3c-4628-8995-502c5d4ae37d" in namespace "security-context-test-3220" to be "Succeeded or Failed"
    Sep 12 21:30:44.015: INFO: Pod "alpine-nnp-false-04af4d92-0b3c-4628-8995-502c5d4ae37d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3776ms
    Sep 12 21:30:46.020: INFO: Pod "alpine-nnp-false-04af4d92-0b3c-4628-8995-502c5d4ae37d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008883427s
    Sep 12 21:30:48.024: INFO: Pod "alpine-nnp-false-04af4d92-0b3c-4628-8995-502c5d4ae37d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013122188s
    Sep 12 21:30:48.024: INFO: Pod "alpine-nnp-false-04af4d92-0b3c-4628-8995-502c5d4ae37d" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:48.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-3220" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":90,"skipped":1696,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:50.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-1387" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":91,"skipped":1719,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:55.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-1016" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":92,"skipped":1720,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    Sep 12 21:30:39.617: INFO: stdout: ""
    Sep 12 21:30:40.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8565 exec execpoddlzp6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
    Sep 12 21:30:40.599: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
    Sep 12 21:30:40.599: INFO: stdout: "externalname-service-sljr2"
    Sep 12 21:30:40.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8565 exec execpoddlzp6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.97.142 80'
    Sep 12 21:30:42.789: INFO: rc: 1
    Sep 12 21:30:42.789: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8565 exec execpoddlzp6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.97.142 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 10.135.97.142 80
    nc: connect to 10.135.97.142 port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep 12 21:30:43.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8565 exec execpoddlzp6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.97.142 80'
    Sep 12 21:30:45.994: INFO: rc: 1
    Sep 12 21:30:45.994: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8565 exec execpoddlzp6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.97.142 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 10.135.97.142 80
    nc: connect to 10.135.97.142 port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep 12 21:30:46.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8565 exec execpoddlzp6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.97.142 80'
    Sep 12 21:30:48.966: INFO: rc: 1
    Sep 12 21:30:48.966: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8565 exec execpoddlzp6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.97.142 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 10.135.97.142 80
    nc: connect to 10.135.97.142 port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep 12 21:30:49.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8565 exec execpoddlzp6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.97.142 80'
    Sep 12 21:30:51.993: INFO: rc: 1
    Sep 12 21:30:51.993: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8565 exec execpoddlzp6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.97.142 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 10.135.97.142 80
    nc: connect to 10.135.97.142 port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep 12 21:30:52.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8565 exec execpoddlzp6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.97.142 80'
    Sep 12 21:30:52.990: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.135.97.142 80\nConnection to 10.135.97.142 80 port [tcp/http] succeeded!\n"
    Sep 12 21:30:52.990: INFO: stdout: ""
    Sep 12 21:30:53.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8565 exec execpoddlzp6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.97.142 80'
    Sep 12 21:30:55.972: INFO: rc: 1
    Sep 12 21:30:55.972: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8565 exec execpoddlzp6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.97.142 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 10.135.97.142 80
    nc: connect to 10.135.97.142 port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep 12 21:30:56.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8565 exec execpoddlzp6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.97.142 80'
    Sep 12 21:30:56.967: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.135.97.142 80\nConnection to 10.135.97.142 80 port [tcp/http] succeeded!\n"
    Sep 12 21:30:56.967: INFO: stdout: "externalname-service-sljr2"
    Sep 12 21:30:56.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8565 exec execpoddlzp6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.7 31011'
... skipping 9 lines ...
    STEP: Destroying namespace "services-8565" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":88,"skipped":1487,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 12 21:30:55.364: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8570dd25-ed6e-4f84-a10f-bb62d8a4f5b0" in namespace "projected-3061" to be "Succeeded or Failed"
    Sep 12 21:30:55.373: INFO: Pod "downwardapi-volume-8570dd25-ed6e-4f84-a10f-bb62d8a4f5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.618604ms
    Sep 12 21:30:57.384: INFO: Pod "downwardapi-volume-8570dd25-ed6e-4f84-a10f-bb62d8a4f5b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019823635s
    STEP: Saw pod success
    Sep 12 21:30:57.384: INFO: Pod "downwardapi-volume-8570dd25-ed6e-4f84-a10f-bb62d8a4f5b0" satisfied condition "Succeeded or Failed"
    Sep 12 21:30:57.390: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod downwardapi-volume-8570dd25-ed6e-4f84-a10f-bb62d8a4f5b0 container client-container: <nil>
    STEP: delete the pod
    Sep 12 21:30:57.418: INFO: Waiting for pod downwardapi-volume-8570dd25-ed6e-4f84-a10f-bb62d8a4f5b0 to disappear
    Sep 12 21:30:57.421: INFO: Pod downwardapi-volume-8570dd25-ed6e-4f84-a10f-bb62d8a4f5b0 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:57.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3061" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":93,"skipped":1744,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:30:57.383: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep 12 21:30:57.436: INFO: Waiting up to 5m0s for pod "pod-65a07c07-2007-44b3-a8bd-a17a7145409e" in namespace "emptydir-9585" to be "Succeeded or Failed"
    Sep 12 21:30:57.446: INFO: Pod "pod-65a07c07-2007-44b3-a8bd-a17a7145409e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.570211ms
    Sep 12 21:30:59.451: INFO: Pod "pod-65a07c07-2007-44b3-a8bd-a17a7145409e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014791858s
    STEP: Saw pod success
    Sep 12 21:30:59.451: INFO: Pod "pod-65a07c07-2007-44b3-a8bd-a17a7145409e" satisfied condition "Succeeded or Failed"
    Sep 12 21:30:59.454: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-md-0-bgx6t-66bf5d755d-dmc7x pod pod-65a07c07-2007-44b3-a8bd-a17a7145409e container test-container: <nil>
    STEP: delete the pod
    Sep 12 21:30:59.471: INFO: Waiting for pod pod-65a07c07-2007-44b3-a8bd-a17a7145409e to disappear
    Sep 12 21:30:59.473: INFO: Pod pod-65a07c07-2007-44b3-a8bd-a17a7145409e no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:30:59.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-9585" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":89,"skipped":1496,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
    STEP: Destroying namespace "webhook-8699-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":94,"skipped":1763,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-5283" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":90,"skipped":1505,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:31:33.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-5806" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":91,"skipped":1514,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:31:33.598: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-33c0909b-e573-442c-9f41-5e7d73b5b22d
    STEP: Creating a pod to test consume secrets
    Sep 12 21:31:33.641: INFO: Waiting up to 5m0s for pod "pod-secrets-558b8b81-4a1a-4443-8d09-db2d5d6acb02" in namespace "secrets-7740" to be "Succeeded or Failed"
    Sep 12 21:31:33.643: INFO: Pod "pod-secrets-558b8b81-4a1a-4443-8d09-db2d5d6acb02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.597228ms
    Sep 12 21:31:35.648: INFO: Pod "pod-secrets-558b8b81-4a1a-4443-8d09-db2d5d6acb02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006938764s
    STEP: Saw pod success
    Sep 12 21:31:35.648: INFO: Pod "pod-secrets-558b8b81-4a1a-4443-8d09-db2d5d6acb02" satisfied condition "Succeeded or Failed"
    Sep 12 21:31:35.650: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-secrets-558b8b81-4a1a-4443-8d09-db2d5d6acb02 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 12 21:31:35.665: INFO: Waiting for pod pod-secrets-558b8b81-4a1a-4443-8d09-db2d5d6acb02 to disappear
    Sep 12 21:31:35.668: INFO: Pod pod-secrets-558b8b81-4a1a-4443-8d09-db2d5d6acb02 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:31:35.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-7740" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":92,"skipped":1514,"failed":8,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:31:11.146: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 12 21:31:11.232: INFO: created pod
    Sep 12 21:31:11.232: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-7197" to be "Succeeded or Failed"
    Sep 12 21:31:11.245: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.671333ms
    Sep 12 21:31:13.249: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017317967s
    STEP: Saw pod success
    Sep 12 21:31:13.250: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
    Sep 12 21:31:43.250: INFO: polling logs
    Sep 12 21:31:43.257: INFO: Pod logs: 
    2022/09/12 21:31:11 OK: Got token
    2022/09/12 21:31:11 validating with in-cluster discovery
    2022/09/12 21:31:11 OK: got issuer https://kubernetes.default.svc.cluster.local
    2022/09/12 21:31:11 Full, not-validated claims: 
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:31:43.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-7197" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":95,"skipped":1767,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 12 21:31:43.321: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create ConfigMap with empty key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap that has name configmap-test-emptyKey-691cd0fb-def7-4aa2-ad0c-f57d43725b02
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 12 21:31:43.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4235" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":96,"skipped":1802,"failed":6,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSS
    ------------------------------
    Sep 12 21:31:43.373: INFO: Running AfterSuite actions on all nodes
    
    
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with secret pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-secret-jc7w
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 12 21:31:35.723: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-jc7w" in namespace "subpath-8056" to be "Succeeded or Failed"
    Sep 12 21:31:35.728: INFO: Pod "pod-subpath-test-secret-jc7w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312134ms
    Sep 12 21:31:37.732: INFO: Pod "pod-subpath-test-secret-jc7w": Phase="Running", Reason="", readiness=true. Elapsed: 2.008771704s
    Sep 12 21:31:39.736: INFO: Pod "pod-subpath-test-secret-jc7w": Phase="Running", Reason="", readiness=true. Elapsed: 4.012972539s
    Sep 12 21:31:41.742: INFO: Pod "pod-subpath-test-secret-jc7w": Phase="Running", Reason="", readiness=true. Elapsed: 6.019209579s
    Sep 12 21:31:43.747: INFO: Pod "pod-subpath-test-secret-jc7w": Phase="Running", Reason="", readiness=true. Elapsed: 8.02369993s
    Sep 12 21:31:45.752: INFO: Pod "pod-subpath-test-secret-jc7w": Phase="Running", Reason="", readiness=true. Elapsed: 10.028510372s
    Sep 12 21:31:47.756: INFO: Pod "pod-subpath-test-secret-jc7w": Phase="Running", Reason="", readiness=true. Elapsed: 12.032817692s
    Sep 12 21:31:49.761: INFO: Pod "pod-subpath-test-secret-jc7w": Phase="Running", Reason="", readiness=true. Elapsed: 14.037771927s
    Sep 12 21:31:51.769: INFO: Pod "pod-subpath-test-secret-jc7w": Phase="Running", Reason="", readiness=true. Elapsed: 16.045284436s
    Sep 12 21:31:53.775: INFO: Pod "pod-subpath-test-secret-jc7w": Phase="Running", Reason="", readiness=true. Elapsed: 18.052205826s
    Sep 12 21:31:55.783: INFO: Pod "pod-subpath-test-secret-jc7w": Phase="Running", Reason="", readiness=true. Elapsed: 20.059786284s
    Sep 12 21:31:57.789: INFO: Pod "pod-subpath-test-secret-jc7w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.065257182s
    STEP: Saw pod success
    Sep 12 21:31:57.789: INFO: Pod "pod-subpath-test-secret-jc7w" satisfied condition "Succeeded or Failed"
    Sep 12 21:31:57.791: INFO: Trying to get logs from node k8s-upgrade-and-conformance-6izh7i-worker-938c6l pod pod-subpath-test-secret-jc7w container test-container-subpath-secret-jc7w: <nil>
    STEP: delete the pod
    Sep 12 21:31:57.806: INFO: Waiting for pod pod-subpath-test-secret-jc7w to disappear
    Sep 12 21:31:57.809: INFO: Pod pod-subpath-test-secret-jc7w no longer exists
    STEP: Deleting pod pod-subpath-test-secret-jc7w
    Sep 12 21:31:57.809: INFO: Deleting pod "pod-subpath-test-secret-jc7w" in namespace "subpath-8056"
... skipping 28 lines ...
    • [SLOW TEST:242.642 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":124,"skipped":2078,"failed":0}
    Sep 12 21:33:13.544: INFO: Running AfterSuite actions on all nodes
    
    STEP: Dumping logs from the "k8s-upgrade-and-conformance-6izh7i" workload cluster 09/12/22 21:33:43.025
    STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-n56wbd" namespace 09/12/22 21:33:46.284
    STEP: Deleting cluster k8s-upgrade-and-conformance-n56wbd/k8s-upgrade-and-conformance-6izh7i 09/12/22 21:33:46.585
    STEP: Deleting cluster k8s-upgrade-and-conformance-6izh7i 09/12/22 21:33:46.606
... skipping 621 lines ...
  [INTERRUPTED] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] [It] Should create and upgrade a workload cluster and eventually run kubetest
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118
  [INTERRUPTED] [SynchronizedAfterSuite] 
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/e2e_suite_test.go:169

Ran 1 of 21 Specs in 3547.002 seconds
FAIL! - Interrupted by Other Ginkgo Process -- 0 Passed | 1 Failed | 0 Pending | 20 Skipped


Ginkgo ran 1 suite in 1h0m14.348089191s

Test Suite Failed
make: *** [Makefile:129: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
++ pgrep -f 'docker events'
+ kill 25654
++ pgrep -f 'ctr -n moby events'
+ kill 25655
... skipping 23 lines ...