Result: FAILURE
Tests: 0 failed / 7 succeeded
Started: 2022-09-21 20:32
Elapsed: 1h7m
Revision: main

No Test Failures!


7 Passed Tests

20 Skipped Tests

Error lines from build-log.txt

... skipping 901 lines ...
Status: Downloaded newer image for quay.io/jetstack/cert-manager-controller:v1.9.1
quay.io/jetstack/cert-manager-controller:v1.9.1
+ export GINKGO_NODES=3
+ GINKGO_NODES=3
+ export GINKGO_NOCOLOR=true
+ GINKGO_NOCOLOR=true
+ export GINKGO_ARGS=--fail-fast
+ GINKGO_ARGS=--fail-fast
+ export E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ export ARTIFACTS=/logs/artifacts
+ ARTIFACTS=/logs/artifacts
+ export SKIP_RESOURCE_CLEANUP=false
+ SKIP_RESOURCE_CLEANUP=false
... skipping 78 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6 --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition.yaml
mkdir -p /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/extension/config/default > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension/deployment.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/ginkgo-v2.1.4 -v --trace --tags=e2e --focus="\[K8s-Upgrade\]"  --nodes=3 --no-color=true --output-dir="/logs/artifacts" --junit-report="junit.e2e_suite.1.xml" --fail-fast . -- \
    -e2e.artifacts-folder="/logs/artifacts" \
    -e2e.config="/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml" \
    -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false
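The GINKGO_* variables exported earlier are expanded by the Makefile into the --nodes, --no-color and --fail-fast flags of this ginkgo invocation, while everything after the bare "--" is handed to the compiled test binary as -e2e.* flags. As a rough sketch (not the actual cluster-api suite code; all names are illustrative), such flags are usually registered in the suite entrypoint like this:

    // Hypothetical sketch of a Ginkgo v2 e2e entrypoint that registers the
    // -e2e.* flags seen after "--" above. The testing package calls
    // flag.Parse() before the suite runs, so init() registration is enough.
    package e2e

    import (
        "flag"
        "testing"

        . "github.com/onsi/ginkgo/v2"
        . "github.com/onsi/gomega"
    )

    var (
        configPath         string
        artifactsFolder    string
        skipCleanup        bool
        useExistingCluster bool
    )

    func init() {
        flag.StringVar(&configPath, "e2e.config", "", "path to the e2e config file (docker.yaml in the run above)")
        flag.StringVar(&artifactsFolder, "e2e.artifacts-folder", "", "directory where logs and JUnit reports are written")
        flag.BoolVar(&skipCleanup, "e2e.skip-resource-cleanup", false, "keep test clusters around for debugging")
        flag.BoolVar(&useExistingCluster, "e2e.use-existing-cluster", false, "reuse an existing management cluster instead of creating one")
    }

    func TestE2E(t *testing.T) {
        RegisterFailHandler(Fail) // wire gomega failures into ginkgo
        RunSpecs(t, "capi-e2e")   // --focus/--nodes on the CLI control which specs run and how
    }

Note that the "Running in parallel across 4 nodes" line further down comes from the nested upstream conformance run inside the workload cluster, not from the --nodes=3 used here.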
go: downloading k8s.io/apimachinery v0.25.0
go: downloading github.com/blang/semver v3.5.1+incompatible
go: downloading github.com/onsi/gomega v1.20.0
... skipping 228 lines ...
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-kcibnj-mp-0-config created
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-kcibnj-mp-0-config-cgroupfs created
    cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-kcibnj created
    machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-kcibnj-mp-0 created
    dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-kcibnj-dmp-0 created

    Failed to get logs for Machine k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh, Cluster k8s-upgrade-and-conformance-tlj9bs/k8s-upgrade-and-conformance-kcibnj: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg, Cluster k8s-upgrade-and-conformance-tlj9bs/k8s-upgrade-and-conformance-kcibnj: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4, Cluster k8s-upgrade-and-conformance-tlj9bs/k8s-upgrade-and-conformance-kcibnj: exit status 2
    Failed to get logs for MachinePool k8s-upgrade-and-conformance-kcibnj-mp-0, Cluster k8s-upgrade-and-conformance-tlj9bs/k8s-upgrade-and-conformance-kcibnj: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec 09/21/22 20:42:59.156
    INFO: Creating namespace k8s-upgrade-and-conformance-tlj9bs
    INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-tlj9bs"
... skipping 41 lines ...
    
    Running in parallel across 4 nodes
    
    Sep 21 20:52:19.443: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 21 20:52:19.447: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
    Sep 21 20:52:19.461: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
    Sep 21 20:52:19.506: INFO: The status of Pod coredns-558bd4d5db-lh94t is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:19.506: INFO: The status of Pod kindnet-68mmq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:19.506: INFO: The status of Pod kindnet-8blk9 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:19.506: INFO: The status of Pod kube-proxy-bz7zc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:19.506: INFO: The status of Pod kube-proxy-kkcqg is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:19.506: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
    Sep 21 20:52:19.506: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 21 20:52:19.506: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 21 20:52:19.506: INFO: coredns-558bd4d5db-lh94t  k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  }]
    Sep 21 20:52:19.506: INFO: kindnet-68mmq             k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:58 +0000 UTC  }]
    Sep 21 20:52:19.506: INFO: kindnet-8blk9             k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:42 +0000 UTC  }]
    Sep 21 20:52:19.506: INFO: kube-proxy-bz7zc          k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  }]
    Sep 21 20:52:19.506: INFO: kube-proxy-kkcqg          k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  }]
    Sep 21 20:52:19.506: INFO: 
    Sep 21 20:52:21.532: INFO: The status of Pod coredns-558bd4d5db-lh94t is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:21.532: INFO: The status of Pod kindnet-68mmq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:21.532: INFO: The status of Pod kindnet-8blk9 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:21.533: INFO: The status of Pod kube-proxy-bz7zc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:21.533: INFO: The status of Pod kube-proxy-kkcqg is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:21.533: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
    Sep 21 20:52:21.533: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 21 20:52:21.533: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 21 20:52:21.533: INFO: coredns-558bd4d5db-lh94t  k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  }]
    Sep 21 20:52:21.533: INFO: kindnet-68mmq             k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:58 +0000 UTC  }]
    Sep 21 20:52:21.533: INFO: kindnet-8blk9             k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:42 +0000 UTC  }]
    Sep 21 20:52:21.533: INFO: kube-proxy-bz7zc          k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  }]
    Sep 21 20:52:21.533: INFO: kube-proxy-kkcqg          k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  }]
    Sep 21 20:52:21.533: INFO: 
    Sep 21 20:52:23.542: INFO: The status of Pod coredns-558bd4d5db-lh94t is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:23.542: INFO: The status of Pod kindnet-68mmq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:23.542: INFO: The status of Pod kindnet-8blk9 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:23.542: INFO: The status of Pod kube-proxy-bz7zc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:23.542: INFO: The status of Pod kube-proxy-kkcqg is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:23.542: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
    Sep 21 20:52:23.542: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 21 20:52:23.542: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 21 20:52:23.542: INFO: coredns-558bd4d5db-lh94t  k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  }]
    Sep 21 20:52:23.542: INFO: kindnet-68mmq             k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:58 +0000 UTC  }]
    Sep 21 20:52:23.542: INFO: kindnet-8blk9             k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:42 +0000 UTC  }]
    Sep 21 20:52:23.542: INFO: kube-proxy-bz7zc          k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  }]
    Sep 21 20:52:23.542: INFO: kube-proxy-kkcqg          k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  }]
    Sep 21 20:52:23.543: INFO: 
    Sep 21 20:52:25.531: INFO: The status of Pod coredns-558bd4d5db-lh94t is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:25.531: INFO: The status of Pod kindnet-68mmq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:25.531: INFO: The status of Pod kindnet-8blk9 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:25.531: INFO: The status of Pod kube-proxy-bz7zc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:25.531: INFO: The status of Pod kube-proxy-kkcqg is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:25.531: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
    Sep 21 20:52:25.531: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 21 20:52:25.531: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 21 20:52:25.531: INFO: coredns-558bd4d5db-lh94t  k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  }]
    Sep 21 20:52:25.531: INFO: kindnet-68mmq             k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:58 +0000 UTC  }]
    Sep 21 20:52:25.531: INFO: kindnet-8blk9             k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:42 +0000 UTC  }]
    Sep 21 20:52:25.532: INFO: kube-proxy-bz7zc          k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  }]
    Sep 21 20:52:25.532: INFO: kube-proxy-kkcqg          k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  }]
    Sep 21 20:52:25.532: INFO: 
    Sep 21 20:52:27.529: INFO: The status of Pod coredns-558bd4d5db-lh94t is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:27.529: INFO: The status of Pod kindnet-68mmq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:27.529: INFO: The status of Pod kindnet-8blk9 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:27.529: INFO: The status of Pod kube-proxy-bz7zc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:27.529: INFO: The status of Pod kube-proxy-kkcqg is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:27.529: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
    Sep 21 20:52:27.529: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 21 20:52:27.529: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 21 20:52:27.529: INFO: coredns-558bd4d5db-lh94t  k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  }]
    Sep 21 20:52:27.529: INFO: kindnet-68mmq             k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:58 +0000 UTC  }]
    Sep 21 20:52:27.529: INFO: kindnet-8blk9             k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:42 +0000 UTC  }]
    Sep 21 20:52:27.529: INFO: kube-proxy-bz7zc          k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  }]
    Sep 21 20:52:27.529: INFO: kube-proxy-kkcqg          k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  }]
    Sep 21 20:52:27.529: INFO: 
    Sep 21 20:52:29.531: INFO: The status of Pod coredns-558bd4d5db-lh94t is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:29.531: INFO: The status of Pod kindnet-68mmq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:29.531: INFO: The status of Pod kindnet-8blk9 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:29.531: INFO: The status of Pod kube-proxy-bz7zc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:29.531: INFO: The status of Pod kube-proxy-kkcqg is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:29.531: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (10 seconds elapsed)
    Sep 21 20:52:29.531: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 21 20:52:29.532: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 21 20:52:29.532: INFO: coredns-558bd4d5db-lh94t  k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  }]
    Sep 21 20:52:29.532: INFO: kindnet-68mmq             k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:58 +0000 UTC  }]
    Sep 21 20:52:29.532: INFO: kindnet-8blk9             k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:42 +0000 UTC  }]
    Sep 21 20:52:29.532: INFO: kube-proxy-bz7zc          k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  }]
    Sep 21 20:52:29.532: INFO: kube-proxy-kkcqg          k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  }]
    Sep 21 20:52:29.532: INFO: 
    Sep 21 20:52:31.535: INFO: The status of Pod coredns-558bd4d5db-lh94t is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:31.535: INFO: The status of Pod kindnet-68mmq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:31.535: INFO: The status of Pod kindnet-8blk9 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:31.535: INFO: The status of Pod kube-proxy-bz7zc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:31.535: INFO: The status of Pod kube-proxy-kkcqg is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:31.535: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (12 seconds elapsed)
    Sep 21 20:52:31.535: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 21 20:52:31.535: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 21 20:52:31.535: INFO: coredns-558bd4d5db-lh94t  k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  }]
    Sep 21 20:52:31.535: INFO: kindnet-68mmq             k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:58 +0000 UTC  }]
    Sep 21 20:52:31.535: INFO: kindnet-8blk9             k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:42 +0000 UTC  }]
    Sep 21 20:52:31.535: INFO: kube-proxy-bz7zc          k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  }]
    Sep 21 20:52:31.535: INFO: kube-proxy-kkcqg          k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  }]
    Sep 21 20:52:31.535: INFO: 
    Sep 21 20:52:33.531: INFO: The status of Pod coredns-558bd4d5db-lh94t is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:33.531: INFO: The status of Pod kindnet-68mmq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:33.531: INFO: The status of Pod kindnet-8blk9 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:33.531: INFO: The status of Pod kube-proxy-bz7zc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:33.531: INFO: The status of Pod kube-proxy-kkcqg is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:33.531: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (14 seconds elapsed)
    Sep 21 20:52:33.531: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 21 20:52:33.531: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 21 20:52:33.531: INFO: coredns-558bd4d5db-lh94t  k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  }]
    Sep 21 20:52:33.531: INFO: kindnet-68mmq             k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:58 +0000 UTC  }]
    Sep 21 20:52:33.531: INFO: kindnet-8blk9             k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:42 +0000 UTC  }]
    Sep 21 20:52:33.531: INFO: kube-proxy-bz7zc          k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  }]
    Sep 21 20:52:33.531: INFO: kube-proxy-kkcqg          k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  }]
    Sep 21 20:52:33.531: INFO: 
    Sep 21 20:52:35.531: INFO: The status of Pod coredns-558bd4d5db-lh94t is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:35.531: INFO: The status of Pod kindnet-68mmq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:35.531: INFO: The status of Pod kindnet-8blk9 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:35.531: INFO: The status of Pod kube-proxy-bz7zc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:35.531: INFO: The status of Pod kube-proxy-kkcqg is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:35.531: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (16 seconds elapsed)
    Sep 21 20:52:35.531: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 21 20:52:35.531: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 21 20:52:35.531: INFO: coredns-558bd4d5db-lh94t  k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  }]
    Sep 21 20:52:35.531: INFO: kindnet-68mmq             k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:58 +0000 UTC  }]
    Sep 21 20:52:35.531: INFO: kindnet-8blk9             k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:42 +0000 UTC  }]
    Sep 21 20:52:35.531: INFO: kube-proxy-bz7zc          k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  }]
    Sep 21 20:52:35.531: INFO: kube-proxy-kkcqg          k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  }]
    Sep 21 20:52:35.531: INFO: 
    Sep 21 20:52:37.534: INFO: The status of Pod coredns-558bd4d5db-lh94t is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:37.534: INFO: The status of Pod kindnet-68mmq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:37.534: INFO: The status of Pod kindnet-8blk9 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:37.534: INFO: The status of Pod kube-proxy-bz7zc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:37.534: INFO: The status of Pod kube-proxy-kkcqg is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:37.534: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (18 seconds elapsed)
    Sep 21 20:52:37.534: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 21 20:52:37.534: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 21 20:52:37.534: INFO: coredns-558bd4d5db-lh94t  k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:43 +0000 UTC  }]
    Sep 21 20:52:37.534: INFO: kindnet-68mmq             k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:58 +0000 UTC  }]
    Sep 21 20:52:37.534: INFO: kindnet-8blk9             k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:42 +0000 UTC  }]
    Sep 21 20:52:37.534: INFO: kube-proxy-bz7zc          k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:09 +0000 UTC  }]
    Sep 21 20:52:37.534: INFO: kube-proxy-kkcqg          k8s-upgrade-and-conformance-kcibnj-worker-zvep6l  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:49:22 +0000 UTC  }]
    Sep 21 20:52:37.534: INFO: 
    Sep 21 20:52:39.567: INFO: The status of Pod coredns-558bd4d5db-85c57 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:39.568: INFO: The status of Pod kindnet-68mmq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:39.568: INFO: The status of Pod kindnet-8blk9 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 21 20:52:39.568: INFO: 15 / 18 pods in namespace 'kube-system' are running and ready (20 seconds elapsed)
    Sep 21 20:52:39.568: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 21 20:52:39.568: INFO: POD                       NODE                                                            PHASE    GRACE  CONDITIONS
    Sep 21 20:52:39.568: INFO: coredns-558bd4d5db-85c57  k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg  Pending         [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:52:39 +0000 UTC  }]
    Sep 21 20:52:39.568: INFO: kindnet-68mmq             k8s-upgrade-and-conformance-kcibnj-worker-zvep6l                Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:58 +0000 UTC  }]
    Sep 21 20:52:39.568: INFO: kindnet-8blk9             k8s-upgrade-and-conformance-kcibnj-worker-sbs9ap                Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:51:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:45:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:44:42 +0000 UTC  }]
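The block above is the upstream e2e framework polling kube-system every two seconds until all pods are Running with the Ready condition True; the kindnet and kube-proxy pods lag while nodes are rolled during the upgrade. A minimal client-go sketch of the same readiness check, assuming the /tmp/kubeconfig path from the log (all names illustrative):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig the suite points at (/tmp/kubeconfig in the log above).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        ready := 0
        for _, p := range pods.Items {
            if p.Status.Phase == corev1.PodRunning && isPodReady(&p) {
                ready++
            }
        }
        fmt.Printf("%d / %d pods in kube-system are running and ready\n", ready, len(pods.Items))
    }

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }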
... skipping 46 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:52:41.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-8985" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:52:41.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-7911" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-network] IngressClass API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:52:41.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingressclass-7580" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    W0921 20:52:41.647210      14 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
    Sep 21 20:52:41.647: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override all
    Sep 21 20:52:41.665: INFO: Waiting up to 5m0s for pod "client-containers-946cb526-293c-4a4f-99fb-23e14a2569ae" in namespace "containers-5702" to be "Succeeded or Failed"

    Sep 21 20:52:41.674: INFO: Pod "client-containers-946cb526-293c-4a4f-99fb-23e14a2569ae": Phase="Pending", Reason="", readiness=false. Elapsed: 9.031167ms
    Sep 21 20:52:43.684: INFO: Pod "client-containers-946cb526-293c-4a4f-99fb-23e14a2569ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018895306s
    Sep 21 20:52:45.691: INFO: Pod "client-containers-946cb526-293c-4a4f-99fb-23e14a2569ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026002583s
    Sep 21 20:52:47.698: INFO: Pod "client-containers-946cb526-293c-4a4f-99fb-23e14a2569ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032746247s
    STEP: Saw pod success
    Sep 21 20:52:47.698: INFO: Pod "client-containers-946cb526-293c-4a4f-99fb-23e14a2569ae" satisfied condition "Succeeded or Failed"

    Sep 21 20:52:47.703: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-13tw3l pod client-containers-946cb526-293c-4a4f-99fb-23e14a2569ae container agnhost-container: <nil>
    STEP: delete the pod
    Sep 21 20:52:47.737: INFO: Waiting for pod client-containers-946cb526-293c-4a4f-99fb-23e14a2569ae to disappear
    Sep 21 20:52:47.743: INFO: Pod client-containers-946cb526-293c-4a4f-99fb-23e14a2569ae no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:52:47.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-5702" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
    Sep 21 20:52:45.794: INFO: The status of Pod pod-update-activedeadlineseconds-04c83010-7678-4375-b3e2-c8b653f547a1 is Running (Ready = true)
    STEP: verifying the pod is in kubernetes
    STEP: updating the pod
    Sep 21 20:52:46.322: INFO: Successfully updated pod "pod-update-activedeadlineseconds-04c83010-7678-4375-b3e2-c8b653f547a1"
    Sep 21 20:52:46.322: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-04c83010-7678-4375-b3e2-c8b653f547a1" in namespace "pods-4075" to be "terminated due to deadline exceeded"
    Sep 21 20:52:46.539: INFO: Pod "pod-update-activedeadlineseconds-04c83010-7678-4375-b3e2-c8b653f547a1": Phase="Running", Reason="", readiness=true. Elapsed: 216.74038ms
    Sep 21 20:52:48.544: INFO: Pod "pod-update-activedeadlineseconds-04c83010-7678-4375-b3e2-c8b653f547a1": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.222234196s

    Sep 21 20:52:48.544: INFO: Pod "pod-update-activedeadlineseconds-04c83010-7678-4375-b3e2-c8b653f547a1" satisfied condition "terminated due to deadline exceeded"
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:52:48.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-4075" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:52:54.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7147" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:52:55.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-2599" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:52:54.506: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-4d8100ff-4d04-44f2-9700-4ce796491149
    STEP: Creating a pod to test consume secrets
    Sep 21 20:52:54.571: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b7b10f8e-0204-4e42-82a3-1a79ff42581d" in namespace "projected-7072" to be "Succeeded or Failed"

    Sep 21 20:52:54.578: INFO: Pod "pod-projected-secrets-b7b10f8e-0204-4e42-82a3-1a79ff42581d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.228586ms
    Sep 21 20:52:56.584: INFO: Pod "pod-projected-secrets-b7b10f8e-0204-4e42-82a3-1a79ff42581d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011937874s
    Sep 21 20:52:58.787: INFO: Pod "pod-projected-secrets-b7b10f8e-0204-4e42-82a3-1a79ff42581d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214538779s
    Sep 21 20:53:00.794: INFO: Pod "pod-projected-secrets-b7b10f8e-0204-4e42-82a3-1a79ff42581d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.221728177s
    STEP: Saw pod success
    Sep 21 20:53:00.794: INFO: Pod "pod-projected-secrets-b7b10f8e-0204-4e42-82a3-1a79ff42581d" satisfied condition "Succeeded or Failed"

    Sep 21 20:53:00.799: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod pod-projected-secrets-b7b10f8e-0204-4e42-82a3-1a79ff42581d container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 21 20:53:00.848: INFO: Waiting for pod pod-projected-secrets-b7b10f8e-0204-4e42-82a3-1a79ff42581d to disappear
    Sep 21 20:53:00.853: INFO: Pod pod-projected-secrets-b7b10f8e-0204-4e42-82a3-1a79ff42581d no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:53:00.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7072" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":23,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:53:02.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-8696" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":4,"skipped":75,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:53:07.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-1128" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":3,"skipped":47,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:53:08.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-1247" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":4,"skipped":59,"failed":0}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:53:49.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-4913" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":5,"skipped":76,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:54:02.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-6648" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":3,"skipped":34,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
    
    Sep 21 20:54:02.079: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment":
    &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88  deployment-4241  a41b7cf0-82aa-4054-98e4-891cacbc7be4 3225 3 2022-09-21 20:53:59 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 8c3baf12-dc3c-4876-b95a-1a62dc3ad3d8 0xc002d3c607 0xc002d3c608}] []  [{kube-controller-manager Update apps/v1 2022-09-21 20:53:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3baf12-dc3c-4876-b95a-1a62dc3ad3d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002d3c688 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
    Sep 21 20:54:02.079: INFO: All old ReplicaSets of Deployment "webserver-deployment":
    Sep 21 20:54:02.080: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb  deployment-4241  b157a298-5c1e-4540-ade3-663606975872 3223 3 2022-09-21 20:53:49 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 8c3baf12-dc3c-4876-b95a-1a62dc3ad3d8 0xc002d3c6e7 0xc002d3c6e8}] []  [{kube-controller-manager Update apps/v1 2022-09-21 20:53:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3baf12-dc3c-4876-b95a-1a62dc3ad3d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [] []  []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002d3c758 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
    Sep 21 20:54:02.152: INFO: Pod "webserver-deployment-795d758f88-6fs7s" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-6fs7s webserver-deployment-795d758f88- deployment-4241  172407a9-d41f-4724-8740-62c16c258726 3229 0 2022-09-21 20:53:59 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a41b7cf0-82aa-4054-98e4-891cacbc7be4 0xc002d3cbc0 0xc002d3cbc1}] []  [{kube-controller-manager Update v1 2022-09-21 20:53:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a41b7cf0-82aa-4054-98e4-891cacbc7be4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-21 20:54:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.8\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-n9957,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9957,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,
SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-21 20:53:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-21 20:53:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-21 20:53:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-21 20:53:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.8,StartTime:2022-09-21 20:53:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 21 20:54:02.153: INFO: Pod "webserver-deployment-795d758f88-9w2d4" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-9w2d4 webserver-deployment-795d758f88- deployment-4241  5be3e193-0087-46d6-92fc-167554115665 3247 0 2022-09-21 20:54:02 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a41b7cf0-82aa-4054-98e4-891cacbc7be4 0xc002d3cdc0 0xc002d3cdc1}] []  [{kube-controller-manager Update v1 2022-09-21 20:54:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a41b7cf0-82aa-4054-98e4-891cacbc7be4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h59zx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h59zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-kcibnj-worker-13tw3l,HostNetwork:false,HostPID:false,HostIPC:false,Se
curityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-21 20:54:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 21 20:54:02.153: INFO: Pod "webserver-deployment-795d758f88-frl9w" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-frl9w webserver-deployment-795d758f88- deployment-4241  705b330d-821e-4277-8e4e-c06e811ee532 3250 0 2022-09-21 20:54:02 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a41b7cf0-82aa-4054-98e4-891cacbc7be4 0xc002d3cf20 0xc002d3cf21}] []  [{kube-controller-manager Update v1 2022-09-21 20:54:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a41b7cf0-82aa-4054-98e4-891cacbc7be4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ljvj5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljvj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg,HostNetwork:false,HostPID:false,Ho
stIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-21 20:54:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 21 20:54:02.154: INFO: Pod "webserver-deployment-795d758f88-gnt9r" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-gnt9r webserver-deployment-795d758f88- deployment-4241  80ebfcf6-6567-4b46-9627-955efa514905 3174 0 2022-09-21 20:53:59 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a41b7cf0-82aa-4054-98e4-891cacbc7be4 0xc002d3d080 0xc002d3d081}] []  [{kube-controller-manager Update v1 2022-09-21 20:53:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a41b7cf0-82aa-4054-98e4-891cacbc7be4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-21 20:53:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4kk7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4kk7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,Allo
wPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-kcibnj-worker-f3twbs,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-21 20:53:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-21 20:53:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-21 20:53:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-21 20:53:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2022-09-21 20:53:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 41 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:54:02.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-4241" for this suite.
    
    •S
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":6,"skipped":119,"failed":0}
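The ReplicaSet dumps above show what proportional scaling means in practice: the annotations record deployment.kubernetes.io/desired-replicas: 30 and max-replicas: 33, i.e. a surge budget of 3 pods, and the controller has split those 33 replicas between the old ReplicaSet (20) and the new, unpullable webserver:404 one (13) in proportion to their sizes rather than draining one before growing the other. A sketch of a Deployment exercising the same strategy; the surge and unavailability numbers are illustrative, not necessarily the values the test sets:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: webserver-demo
    spec:
      replicas: 30
      selector:
        matchLabels:
          name: httpd
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 3          # total pods may reach replicas + maxSurge = 33
          maxUnavailable: 2
      template:
        metadata:
          labels:
            name: httpd
        spec:
          containers:
          - name: httpd
            image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1

Scaling the Deployment while a rollout is stalled (for example with kubectl scale deployment webserver-demo --replicas=30) is what triggers the proportional split between the old and new ReplicaSets.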
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:54:09.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7448" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":55,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:54:12.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-1793" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":7,"skipped":127,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:54:13.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-6348" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":5,"skipped":99,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 20:54:12.885: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e40643a-2fd3-4286-8ad8-becb05975746" in namespace "downward-api-5283" to be "Succeeded or Failed"
    Sep 21 20:54:12.891: INFO: Pod "downwardapi-volume-5e40643a-2fd3-4286-8ad8-becb05975746": Phase="Pending", Reason="", readiness=false. Elapsed: 5.169116ms
    Sep 21 20:54:14.916: INFO: Pod "downwardapi-volume-5e40643a-2fd3-4286-8ad8-becb05975746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030738993s
    Sep 21 20:54:16.922: INFO: Pod "downwardapi-volume-5e40643a-2fd3-4286-8ad8-becb05975746": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036433441s
    Sep 21 20:54:18.937: INFO: Pod "downwardapi-volume-5e40643a-2fd3-4286-8ad8-becb05975746": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051218564s
    Sep 21 20:54:20.944: INFO: Pod "downwardapi-volume-5e40643a-2fd3-4286-8ad8-becb05975746": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058221614s
    STEP: Saw pod success
    Sep 21 20:54:20.944: INFO: Pod "downwardapi-volume-5e40643a-2fd3-4286-8ad8-becb05975746" satisfied condition "Succeeded or Failed"
    Sep 21 20:54:20.951: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod downwardapi-volume-5e40643a-2fd3-4286-8ad8-becb05975746 container client-container: <nil>
    STEP: delete the pod
    Sep 21 20:54:21.006: INFO: Waiting for pod downwardapi-volume-5e40643a-2fd3-4286-8ad8-becb05975746 to disappear
    Sep 21 20:54:21.010: INFO: Pod downwardapi-volume-5e40643a-2fd3-4286-8ad8-becb05975746 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:54:21.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-5283" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":173,"failed":0}
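The downward API test above mounts a volume that exposes the container's CPU request as a file and then reads it back from the client-container. A minimal sketch of such a pod; the pod name, image, path, and divisor are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m      # report the request in millicores

With a 250m request and a divisor of 1m the mounted file contains 250; the pod runs once, prints the value, and terminates, which matches the "Succeeded or Failed" wait seen in the log.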
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:54:21.073: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating secret secrets-191/secret-test-2cff4dc7-ce0c-4b0c-934e-cad8321e3f5f
    STEP: Creating a pod to test consume secrets
    Sep 21 20:54:21.201: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3c46cc4-6c7f-483e-8892-2028769c115a" in namespace "secrets-191" to be "Succeeded or Failed"
    Sep 21 20:54:21.211: INFO: Pod "pod-configmaps-f3c46cc4-6c7f-483e-8892-2028769c115a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.353533ms
    Sep 21 20:54:23.219: INFO: Pod "pod-configmaps-f3c46cc4-6c7f-483e-8892-2028769c115a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018326996s
    STEP: Saw pod success
    Sep 21 20:54:23.220: INFO: Pod "pod-configmaps-f3c46cc4-6c7f-483e-8892-2028769c115a" satisfied condition "Succeeded or Failed"
    Sep 21 20:54:23.225: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-13tw3l pod pod-configmaps-f3c46cc4-6c7f-483e-8892-2028769c115a container env-test: <nil>
    STEP: delete the pod
    Sep 21 20:54:23.276: INFO: Waiting for pod pod-configmaps-f3c46cc4-6c7f-483e-8892-2028769c115a to disappear
    Sep 21 20:54:23.282: INFO: Pod pod-configmaps-f3c46cc4-6c7f-483e-8892-2028769c115a no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:54:23.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-191" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":182,"failed":0}
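The Secrets test above creates a secret and a pod whose env-test container consumes it through environment variables. A minimal sketch, with illustrative names, key, and image:

    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-demo
    stringData:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox
        command: ["sh", "-c", "echo $SECRET_DATA"]
        env:
        - name: SECRET_DATA
          valueFrom:
            secretKeyRef:
              name: secret-demo
              key: data-1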
    
    SS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    Sep 21 20:54:25.787: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
    Sep 21 20:54:25.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7377 describe pod agnhost-primary-k5qql'
    Sep 21 20:54:26.026: INFO: stderr: ""
    Sep 21 20:54:26.026: INFO: stdout: "Name:         agnhost-primary-k5qql\nNamespace:    kubectl-7377\nPriority:     0\nNode:         k8s-upgrade-and-conformance-kcibnj-worker-13tw3l/172.18.0.5\nStart Time:   Wed, 21 Sep 2022 20:54:24 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           192.168.6.15\nIPs:\n  IP:           192.168.6.15\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://77c9d000adea42296762ed641fce286c3d4eafdd7d093e8347aaa19570ebac99\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 21 Sep 2022 20:54:25 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6tntk (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-6tntk:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  2s    default-scheduler  Successfully assigned kubectl-7377/agnhost-primary-k5qql to k8s-upgrade-and-conformance-kcibnj-worker-13tw3l\n  Normal  Pulled     2s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    2s    kubelet            Created container agnhost-primary\n  Normal  Started    1s    kubelet            Started container agnhost-primary\n"
    Sep 21 20:54:26.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7377 describe rc agnhost-primary'
    Sep 21 20:54:26.323: INFO: stderr: ""
    Sep 21 20:54:26.323: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-7377\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  2s    replication-controller  Created pod: agnhost-primary-k5qql\n"
    Sep 21 20:54:26.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7377 describe service agnhost-primary'
    Sep 21 20:54:26.647: INFO: stderr: ""
    Sep 21 20:54:26.647: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-7377\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                10.142.235.189\nIPs:               10.142.235.189\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.6.15:6379\nSession Affinity:  None\nEvents:            <none>\n"
    Sep 21 20:54:26.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7377 describe node k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh'
    Sep 21 20:54:26.930: INFO: stderr: ""
    Sep 21 20:54:26.930: INFO: stdout: "Name:               k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh\nRoles:              control-plane,master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/control-plane=\n                    node-role.kubernetes.io/master=\n                    node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations:        cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-kcibnj\n                    cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-tlj9bs\n                    cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh\n                    cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n                    cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-kcibnj-jnvjm\n                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 21 Sep 2022 20:46:05 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh\n  AcquireTime:     <unset>\n  RenewTime:       Wed, 21 Sep 2022 20:54:26 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 21 Sep 2022 20:51:48 +0000   Wed, 21 Sep 2022 20:46:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 21 Sep 2022 20:51:48 +0000   Wed, 21 Sep 2022 20:46:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 21 Sep 2022 20:51:48 +0000   Wed, 21 Sep 2022 20:46:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 21 Sep 2022 20:51:48 +0000   Wed, 21 Sep 2022 20:46:46 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.9\n  Hostname:    k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860676Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 47276d20188148eda42602a620787e71\n  System UUID:                7e5c2b6f-4b00-4bd2-ab3f-ade4446b4d31\n  Boot ID:                    88b70aea-6fc1-4144-9137-6686749f7b00\n  Kernel Version:             5.4.0-1076-gke\n  OS Image:                   Ubuntu 22.04.1 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.6.7\n  Kubelet Version:            v1.21.14\n  Kube-Proxy Version:         v1.21.14\nPodCIDR:                      192.168.5.0/24\nPodCIDRs:       
              192.168.5.0/24\nProviderID:                   docker:////k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh\nNon-terminated Pods:          (6 in total)\n  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m16s\n  kube-system                 kindnet-72h7l                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m21s\n  kube-system                 kube-apiserver-k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m20s\n  kube-system                 kube-controller-manager-k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m20s\n  kube-system                 kube-proxy-z57d8                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s\n  kube-system                 kube-scheduler-k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m20s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                750m (9%)   100m (1%)\n  memory             150Mi (0%)  50Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:\n  Type     Reason                    Age                    From        Message\n  ----     ------                    ----                   ----        -------\n  Normal   Starting                  8m21s                  kubelet     Starting kubelet.\n  Warning  InvalidDiskCapacity       8m21s                  kubelet     invalid capacity 0 on image filesystem\n  Normal   NodeHasSufficientMemory   8m21s (x2 over 8m21s)  kubelet     Node k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     8m21s (x2 over 8m21s)  kubelet     Node k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      8m21s (x2 over 8m21s)  kubelet     Node k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced   8m21s                  kubelet     Updated Node Allocatable limit across pods\n  Warning  CheckLimitsForResolvConf  8m21s                  kubelet     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   Starting                  8m                     kube-proxy  Starting kube-proxy.\n  Normal   NodeReady                 7m40s                  kubelet     Node k8s-upgrade-and-conformance-kcibnj-jnvjm-h7vhh status is now: NodeReady\n  Normal   Starting                  5m11s                  kube-proxy  Starting kube-proxy.\n"
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:54:27.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7377" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":10,"skipped":184,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:54:27.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-7266" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":11,"skipped":214,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:54:41.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-3470" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":243,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:54:44.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-4171" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":13,"skipped":282,"failed":0}
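The EndpointSlice test above checks that creating a Service with a selector causes matching Endpoints and EndpointSlice objects to appear, and that deleting the Service removes them. A rough manual version; the Service name, selector, and ports are illustrative:

    # endpointslice-demo.yaml (illustrative)
    apiVersion: v1
    kind: Service
    metadata:
      name: endpointslice-demo
    spec:
      selector:
        app: demo
      ports:
      - port: 80
        targetPort: 8080

    kubectl apply -f endpointslice-demo.yaml
    # EndpointSlices are labelled with the owning Service's name.
    kubectl get endpointslices -l kubernetes.io/service-name=endpointslice-demo
    kubectl delete service endpointslice-demo
    # ...after which the mirrored Endpoints object and the slices should be cleaned up.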
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:54:44.793: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-272476f6-c005-4562-adfc-d0192d7585c4
    STEP: Creating a pod to test consume configMaps
    Sep 21 20:54:44.905: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e1dfe91a-7d0c-4f59-8cef-80accae0329b" in namespace "projected-2580" to be "Succeeded or Failed"
    Sep 21 20:54:44.921: INFO: Pod "pod-projected-configmaps-e1dfe91a-7d0c-4f59-8cef-80accae0329b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.692654ms
    Sep 21 20:54:46.928: INFO: Pod "pod-projected-configmaps-e1dfe91a-7d0c-4f59-8cef-80accae0329b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023365151s
    STEP: Saw pod success
    Sep 21 20:54:46.929: INFO: Pod "pod-projected-configmaps-e1dfe91a-7d0c-4f59-8cef-80accae0329b" satisfied condition "Succeeded or Failed"
    Sep 21 20:54:46.936: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod pod-projected-configmaps-e1dfe91a-7d0c-4f59-8cef-80accae0329b container agnhost-container: <nil>
    STEP: delete the pod
    Sep 21 20:54:46.971: INFO: Waiting for pod pod-projected-configmaps-e1dfe91a-7d0c-4f59-8cef-80accae0329b to disappear
    Sep 21 20:54:46.978: INFO: Pod pod-projected-configmaps-e1dfe91a-7d0c-4f59-8cef-80accae0329b no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:54:46.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2580" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":314,"failed":0}
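The projected configMap test above mounts a ConfigMap through a projected volume and reads it back from the container. A minimal sketch with illustrative names and image (the suite uses agnhost; busybox is substituted here for brevity):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: projected-cm-demo
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-pod
    spec:
      restartPolicy: Never
      containers:
      - name: agnhost-container
        image: busybox
        command: ["sh", "-c", "cat /etc/projected/data-1"]
        volumeMounts:
        - name: config
          mountPath: /etc/projected
      volumes:
      - name: config
        projected:
          sources:
          - configMap:
              name: projected-cm-demo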
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:54:13.620: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 21 20:54:13.838: INFO: created pod
    Sep 21 20:54:13.839: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-2441" to be "Succeeded or Failed"
    Sep 21 20:54:13.856: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 17.41086ms
    Sep 21 20:54:15.866: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 2.026936628s
    Sep 21 20:54:17.914: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075842036s
    STEP: Saw pod success
    Sep 21 20:54:17.915: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
    Sep 21 20:54:47.915: INFO: polling logs
    Sep 21 20:54:47.928: INFO: Pod logs: 
    2022/09/21 20:54:15 OK: Got token
    2022/09/21 20:54:15 validating with in-cluster discovery
    2022/09/21 20:54:15 OK: got issuer https://kubernetes.default.svc.cluster.local
    2022/09/21 20:54:15 Full, not-validated claims: 
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:54:47.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-2441" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":6,"skipped":100,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] version v1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 39 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:54:50.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "proxy-8572" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":7,"skipped":142,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:54:59.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-1155" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":163,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:54:59.259: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-3549/configmap-test-1fff290d-74ac-48dc-9343-3ef34db8fd6c
    STEP: Creating a pod to test consume configMaps
    Sep 21 20:54:59.343: INFO: Waiting up to 5m0s for pod "pod-configmaps-b065b0c7-7ecb-4bb5-a1be-32970adf84f1" in namespace "configmap-3549" to be "Succeeded or Failed"

    Sep 21 20:54:59.352: INFO: Pod "pod-configmaps-b065b0c7-7ecb-4bb5-a1be-32970adf84f1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.406403ms
    Sep 21 20:55:01.359: INFO: Pod "pod-configmaps-b065b0c7-7ecb-4bb5-a1be-32970adf84f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015932307s
    STEP: Saw pod success
    Sep 21 20:55:01.359: INFO: Pod "pod-configmaps-b065b0c7-7ecb-4bb5-a1be-32970adf84f1" satisfied condition "Succeeded or Failed"

    Sep 21 20:55:01.365: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-13tw3l pod pod-configmaps-b065b0c7-7ecb-4bb5-a1be-32970adf84f1 container env-test: <nil>
    STEP: delete the pod
    Sep 21 20:55:01.394: INFO: Waiting for pod pod-configmaps-b065b0c7-7ecb-4bb5-a1be-32970adf84f1 to disappear
    Sep 21 20:55:01.402: INFO: Pod pod-configmaps-b065b0c7-7ecb-4bb5-a1be-32970adf84f1 no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:01.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-3549" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":171,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:55:01.426: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test env composition
    Sep 21 20:55:01.487: INFO: Waiting up to 5m0s for pod "var-expansion-f499ba8d-4988-4bc5-bb58-a0aa4d845549" in namespace "var-expansion-4786" to be "Succeeded or Failed"

    Sep 21 20:55:01.493: INFO: Pod "var-expansion-f499ba8d-4988-4bc5-bb58-a0aa4d845549": Phase="Pending", Reason="", readiness=false. Elapsed: 5.130725ms
    Sep 21 20:55:03.500: INFO: Pod "var-expansion-f499ba8d-4988-4bc5-bb58-a0aa4d845549": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012352912s
    STEP: Saw pod success
    Sep 21 20:55:03.500: INFO: Pod "var-expansion-f499ba8d-4988-4bc5-bb58-a0aa4d845549" satisfied condition "Succeeded or Failed"

    Sep 21 20:55:03.504: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod var-expansion-f499ba8d-4988-4bc5-bb58-a0aa4d845549 container dapi-container: <nil>
    STEP: delete the pod
    Sep 21 20:55:03.533: INFO: Waiting for pod var-expansion-f499ba8d-4988-4bc5-bb58-a0aa4d845549 to disappear
    Sep 21 20:55:03.540: INFO: Pod var-expansion-f499ba8d-4988-4bc5-bb58-a0aa4d845549 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:03.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-4786" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":172,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:05.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-767" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":185,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Servers with support for Table transformation
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:05.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "tables-3795" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":12,"skipped":193,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
    Sep 21 20:54:54.859: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should honor timeout [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Setting timeout (1s) shorter than webhook latency (5s)
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
    STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore

    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is longer than webhook latency
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is empty (defaulted to 10s in v1)
    STEP: Registering slow webhook via the AdmissionRegistration API
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:07.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-6884" for this suite.
    STEP: Destroying namespace "webhook-6884-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":15,"skipped":330,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:55:05.892: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-8842a62e-b1a4-45cc-b7ce-10dbee8f26bb
    STEP: Creating a pod to test consume secrets
    Sep 21 20:55:05.977: INFO: Waiting up to 5m0s for pod "pod-secrets-d72bed18-065f-4bb1-bbb0-095defa9b20a" in namespace "secrets-3070" to be "Succeeded or Failed"

    Sep 21 20:55:05.992: INFO: Pod "pod-secrets-d72bed18-065f-4bb1-bbb0-095defa9b20a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.939843ms
    Sep 21 20:55:08.000: INFO: Pod "pod-secrets-d72bed18-065f-4bb1-bbb0-095defa9b20a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022800245s
    STEP: Saw pod success
    Sep 21 20:55:08.000: INFO: Pod "pod-secrets-d72bed18-065f-4bb1-bbb0-095defa9b20a" satisfied condition "Succeeded or Failed"

    Sep 21 20:55:08.006: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod pod-secrets-d72bed18-065f-4bb1-bbb0-095defa9b20a container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 21 20:55:08.036: INFO: Waiting for pod pod-secrets-d72bed18-065f-4bb1-bbb0-095defa9b20a to disappear
    Sep 21 20:55:08.043: INFO: Pod pod-secrets-d72bed18-065f-4bb1-bbb0-095defa9b20a no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:08.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3070" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":197,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:09.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-6274" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":14,"skipped":235,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 20:55:07.726: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ff084ae-b511-4616-84ca-768ce2f7f7eb" in namespace "downward-api-26" to be "Succeeded or Failed"

    Sep 21 20:55:07.741: INFO: Pod "downwardapi-volume-3ff084ae-b511-4616-84ca-768ce2f7f7eb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.7044ms
    Sep 21 20:55:09.754: INFO: Pod "downwardapi-volume-3ff084ae-b511-4616-84ca-768ce2f7f7eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026659779s
    STEP: Saw pod success
    Sep 21 20:55:09.754: INFO: Pod "downwardapi-volume-3ff084ae-b511-4616-84ca-768ce2f7f7eb" satisfied condition "Succeeded or Failed"

    Sep 21 20:55:09.775: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod downwardapi-volume-3ff084ae-b511-4616-84ca-768ce2f7f7eb container client-container: <nil>
    STEP: delete the pod
    Sep 21 20:55:09.812: INFO: Waiting for pod downwardapi-volume-3ff084ae-b511-4616-84ca-768ce2f7f7eb to disappear
    Sep 21 20:55:09.823: INFO: Pod downwardapi-volume-3ff084ae-b511-4616-84ca-768ce2f7f7eb no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:09.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-26" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":370,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:55:09.725: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-5f0142f8-cead-4a87-b3fe-142409cd0d76
    STEP: Creating a pod to test consume secrets
    Sep 21 20:55:09.838: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-11c2207c-1d77-435d-96b3-68f93a34de49" in namespace "projected-7482" to be "Succeeded or Failed"

    Sep 21 20:55:09.845: INFO: Pod "pod-projected-secrets-11c2207c-1d77-435d-96b3-68f93a34de49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.972711ms
    Sep 21 20:55:11.854: INFO: Pod "pod-projected-secrets-11c2207c-1d77-435d-96b3-68f93a34de49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015977056s
    STEP: Saw pod success
    Sep 21 20:55:11.854: INFO: Pod "pod-projected-secrets-11c2207c-1d77-435d-96b3-68f93a34de49" satisfied condition "Succeeded or Failed"

    Sep 21 20:55:11.863: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod pod-projected-secrets-11c2207c-1d77-435d-96b3-68f93a34de49 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 21 20:55:11.902: INFO: Waiting for pod pod-projected-secrets-11c2207c-1d77-435d-96b3-68f93a34de49 to disappear
    Sep 21 20:55:11.908: INFO: Pod pod-projected-secrets-11c2207c-1d77-435d-96b3-68f93a34de49 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:11.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7482" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":240,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:55:09.958: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 21 20:55:10.041: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-3cf8f976-916f-49f7-87de-c629be0012e5" in namespace "security-context-test-5694" to be "Succeeded or Failed"

    Sep 21 20:55:10.048: INFO: Pod "alpine-nnp-false-3cf8f976-916f-49f7-87de-c629be0012e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.903431ms
    Sep 21 20:55:12.061: INFO: Pod "alpine-nnp-false-3cf8f976-916f-49f7-87de-c629be0012e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019742072s
    Sep 21 20:55:14.069: INFO: Pod "alpine-nnp-false-3cf8f976-916f-49f7-87de-c629be0012e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027452438s
    Sep 21 20:55:14.069: INFO: Pod "alpine-nnp-false-3cf8f976-916f-49f7-87de-c629be0012e5" satisfied condition "Succeeded or Failed"

    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:14.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-5694" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":402,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:14.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-7580" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":18,"skipped":434,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:55:14.444: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename job
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a job
    STEP: Ensuring job reaches completions
    [AfterEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:22.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-6008" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":19,"skipped":444,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:22.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9380" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":20,"skipped":448,"failed":0}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:52:47.847: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod with failed condition
    STEP: updating the pod
    Sep 21 20:54:48.435: INFO: Successfully updated pod "var-expansion-7db6372a-367c-451b-9d6b-7ecfd9a2dce3"
    STEP: waiting for pod running
    STEP: deleting the pod gracefully
    Sep 21 20:54:50.460: INFO: Deleting pod "var-expansion-7db6372a-367c-451b-9d6b-7ecfd9a2dce3" in namespace "var-expansion-2572"
    Sep 21 20:54:50.493: INFO: Wait up to 5m0s for pod "var-expansion-7db6372a-367c-451b-9d6b-7ecfd9a2dce3" to be fully deleted
... skipping 6 lines ...
    • [SLOW TEST:162.673 seconds]
    [sig-node] Variable Expansion
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":2,"skipped":51,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:32.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-1633" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":21,"skipped":465,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:38.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-173" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":22,"skipped":473,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-4074-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":23,"skipped":528,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:55:42.574: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep 21 20:55:42.671: INFO: Waiting up to 5m0s for pod "pod-78f4d2de-85d2-4bd4-a111-18441681ff24" in namespace "emptydir-4364" to be "Succeeded or Failed"

    Sep 21 20:55:42.683: INFO: Pod "pod-78f4d2de-85d2-4bd4-a111-18441681ff24": Phase="Pending", Reason="", readiness=false. Elapsed: 12.685274ms
    Sep 21 20:55:44.690: INFO: Pod "pod-78f4d2de-85d2-4bd4-a111-18441681ff24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019463177s
    Sep 21 20:55:46.697: INFO: Pod "pod-78f4d2de-85d2-4bd4-a111-18441681ff24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026361195s
    STEP: Saw pod success
    Sep 21 20:55:46.698: INFO: Pod "pod-78f4d2de-85d2-4bd4-a111-18441681ff24" satisfied condition "Succeeded or Failed"

    Sep 21 20:55:46.705: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod pod-78f4d2de-85d2-4bd4-a111-18441681ff24 container test-container: <nil>
    STEP: delete the pod
    Sep 21 20:55:46.732: INFO: Waiting for pod pod-78f4d2de-85d2-4bd4-a111-18441681ff24 to disappear
    Sep 21 20:55:46.737: INFO: Pod pod-78f4d2de-85d2-4bd4-a111-18441681ff24 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:55:46.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4364" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":572,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 43 lines ...
    STEP: Destroying namespace "services-2322" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":25,"skipped":624,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:56:40.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9836" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":261,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Lease
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:56:41.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "lease-test-5497" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":17,"skipped":303,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 20:56:41.604: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff123001-cf1e-4b6e-8a20-ba6c74dc8ea2" in namespace "downward-api-6323" to be "Succeeded or Failed"

    Sep 21 20:56:41.617: INFO: Pod "downwardapi-volume-ff123001-cf1e-4b6e-8a20-ba6c74dc8ea2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.131231ms
    Sep 21 20:56:43.622: INFO: Pod "downwardapi-volume-ff123001-cf1e-4b6e-8a20-ba6c74dc8ea2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017139603s
    STEP: Saw pod success
    Sep 21 20:56:43.622: INFO: Pod "downwardapi-volume-ff123001-cf1e-4b6e-8a20-ba6c74dc8ea2" satisfied condition "Succeeded or Failed"

    Sep 21 20:56:43.631: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod downwardapi-volume-ff123001-cf1e-4b6e-8a20-ba6c74dc8ea2 container client-container: <nil>
    STEP: delete the pod
    Sep 21 20:56:43.676: INFO: Waiting for pod downwardapi-volume-ff123001-cf1e-4b6e-8a20-ba6c74dc8ea2 to disappear
    Sep 21 20:56:43.681: INFO: Pod downwardapi-volume-ff123001-cf1e-4b6e-8a20-ba6c74dc8ea2 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:56:43.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6323" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":354,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
    STEP: Destroying namespace "webhook-302-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":19,"skipped":359,"failed":0}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
    STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1398-crds.webhook.example.com via the AdmissionRegistration API
    Sep 21 20:57:07.530: INFO: Waiting for webhook configuration to be ready...
    Sep 21 20:57:17.646: INFO: Waiting for webhook configuration to be ready...
    Sep 21 20:57:27.750: INFO: Waiting for webhook configuration to be ready...
    Sep 21 20:57:37.847: INFO: Waiting for webhook configuration to be ready...
    Sep 21 20:57:47.864: INFO: Waiting for webhook configuration to be ready...
    Sep 21 20:57:47.865: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should mutate custom resource [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 21 20:57:47.865: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 23 lines ...
    • [SLOW TEST:300.103 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule jobs when suspended [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":5,"skipped":84,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSliceMirroring
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:58:09.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslicemirroring-3135" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":6,"skipped":90,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:58:09.176: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-9eceb742-a68d-48a8-a6a6-c56ce8dbb6a1
    STEP: Creating a pod to test consume configMaps
    Sep 21 20:58:09.266: INFO: Waiting up to 5m0s for pod "pod-configmaps-2fbe6597-dba8-4d0d-b89b-21d10ff69442" in namespace "configmap-8336" to be "Succeeded or Failed"

    Sep 21 20:58:09.272: INFO: Pod "pod-configmaps-2fbe6597-dba8-4d0d-b89b-21d10ff69442": Phase="Pending", Reason="", readiness=false. Elapsed: 5.937868ms
    Sep 21 20:58:11.279: INFO: Pod "pod-configmaps-2fbe6597-dba8-4d0d-b89b-21d10ff69442": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012803148s
    STEP: Saw pod success
    Sep 21 20:58:11.279: INFO: Pod "pod-configmaps-2fbe6597-dba8-4d0d-b89b-21d10ff69442" satisfied condition "Succeeded or Failed"

    Sep 21 20:58:11.284: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod pod-configmaps-2fbe6597-dba8-4d0d-b89b-21d10ff69442 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 21 20:58:11.312: INFO: Waiting for pod pod-configmaps-2fbe6597-dba8-4d0d-b89b-21d10ff69442 to disappear
    Sep 21 20:58:11.315: INFO: Pod pod-configmaps-2fbe6597-dba8-4d0d-b89b-21d10ff69442 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:58:11.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-8336" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":93,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 20:58:11.411: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78c740a7-2d72-4657-8182-58d502f847b9" in namespace "downward-api-4135" to be "Succeeded or Failed"

    Sep 21 20:58:11.417: INFO: Pod "downwardapi-volume-78c740a7-2d72-4657-8182-58d502f847b9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.299546ms
    Sep 21 20:58:13.430: INFO: Pod "downwardapi-volume-78c740a7-2d72-4657-8182-58d502f847b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018971594s
    STEP: Saw pod success
    Sep 21 20:58:13.431: INFO: Pod "downwardapi-volume-78c740a7-2d72-4657-8182-58d502f847b9" satisfied condition "Succeeded or Failed"

    Sep 21 20:58:13.437: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-13tw3l pod downwardapi-volume-78c740a7-2d72-4657-8182-58d502f847b9 container client-container: <nil>
    STEP: delete the pod
    Sep 21 20:58:13.501: INFO: Waiting for pod downwardapi-volume-78c740a7-2d72-4657-8182-58d502f847b9 to disappear
    Sep 21 20:58:13.507: INFO: Pod downwardapi-volume-78c740a7-2d72-4657-8182-58d502f847b9 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:58:13.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-4135" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":98,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:58:23.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-3156" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":9,"skipped":103,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Discovery
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 89 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:58:24.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "discovery-943" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":10,"skipped":108,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:58:24.883: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-a0d40d91-4411-4360-99a6-075b7683ea15
    STEP: Creating a pod to test consume secrets
    Sep 21 20:58:24.966: INFO: Waiting up to 5m0s for pod "pod-secrets-2a62ea66-012b-4c8f-9af0-3a9bea4c3582" in namespace "secrets-9378" to be "Succeeded or Failed"

    Sep 21 20:58:24.972: INFO: Pod "pod-secrets-2a62ea66-012b-4c8f-9af0-3a9bea4c3582": Phase="Pending", Reason="", readiness=false. Elapsed: 5.635755ms
    Sep 21 20:58:26.985: INFO: Pod "pod-secrets-2a62ea66-012b-4c8f-9af0-3a9bea4c3582": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018451028s
    STEP: Saw pod success
    Sep 21 20:58:26.985: INFO: Pod "pod-secrets-2a62ea66-012b-4c8f-9af0-3a9bea4c3582" satisfied condition "Succeeded or Failed"

    Sep 21 20:58:26.993: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-13tw3l pod pod-secrets-2a62ea66-012b-4c8f-9af0-3a9bea4c3582 container secret-env-test: <nil>
    STEP: delete the pod
    Sep 21 20:58:27.027: INFO: Waiting for pod pod-secrets-2a62ea66-012b-4c8f-9af0-3a9bea4c3582 to disappear
    Sep 21 20:58:27.034: INFO: Pod pod-secrets-2a62ea66-012b-4c8f-9af0-3a9bea4c3582 no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:58:27.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-9378" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":157,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":19,"skipped":377,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:57:48.529: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
    STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9767-crds.webhook.example.com via the AdmissionRegistration API
    Sep 21 20:58:03.526: INFO: Waiting for webhook configuration to be ready...
    Sep 21 20:58:13.651: INFO: Waiting for webhook configuration to be ready...
    Sep 21 20:58:23.772: INFO: Waiting for webhook configuration to be ready...
    Sep 21 20:58:33.846: INFO: Waiting for webhook configuration to be ready...
    Sep 21 20:58:43.858: INFO: Waiting for webhook configuration to be ready...
    Sep 21 20:58:43.858: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should mutate custom resource [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 21 20:58:43.858: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with downward pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-downwardapi-9qhx
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 21 20:58:27.252: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-9qhx" in namespace "subpath-7557" to be "Succeeded or Failed"

    Sep 21 20:58:27.260: INFO: Pod "pod-subpath-test-downwardapi-9qhx": Phase="Pending", Reason="", readiness=false. Elapsed: 7.872368ms
    Sep 21 20:58:29.268: INFO: Pod "pod-subpath-test-downwardapi-9qhx": Phase="Running", Reason="", readiness=true. Elapsed: 2.015977642s
    Sep 21 20:58:31.274: INFO: Pod "pod-subpath-test-downwardapi-9qhx": Phase="Running", Reason="", readiness=true. Elapsed: 4.022031578s
    Sep 21 20:58:33.280: INFO: Pod "pod-subpath-test-downwardapi-9qhx": Phase="Running", Reason="", readiness=true. Elapsed: 6.027902879s
    Sep 21 20:58:35.286: INFO: Pod "pod-subpath-test-downwardapi-9qhx": Phase="Running", Reason="", readiness=true. Elapsed: 8.033374302s
    Sep 21 20:58:37.292: INFO: Pod "pod-subpath-test-downwardapi-9qhx": Phase="Running", Reason="", readiness=true. Elapsed: 10.039559873s
    Sep 21 20:58:39.297: INFO: Pod "pod-subpath-test-downwardapi-9qhx": Phase="Running", Reason="", readiness=true. Elapsed: 12.045066972s
    Sep 21 20:58:41.305: INFO: Pod "pod-subpath-test-downwardapi-9qhx": Phase="Running", Reason="", readiness=true. Elapsed: 14.052246694s
    Sep 21 20:58:43.312: INFO: Pod "pod-subpath-test-downwardapi-9qhx": Phase="Running", Reason="", readiness=true. Elapsed: 16.06004085s
    Sep 21 20:58:45.318: INFO: Pod "pod-subpath-test-downwardapi-9qhx": Phase="Running", Reason="", readiness=true. Elapsed: 18.06530605s
    Sep 21 20:58:47.323: INFO: Pod "pod-subpath-test-downwardapi-9qhx": Phase="Running", Reason="", readiness=true. Elapsed: 20.070355738s
    Sep 21 20:58:49.331: INFO: Pod "pod-subpath-test-downwardapi-9qhx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.078963919s
    STEP: Saw pod success
    Sep 21 20:58:49.331: INFO: Pod "pod-subpath-test-downwardapi-9qhx" satisfied condition "Succeeded or Failed"

    Sep 21 20:58:49.337: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod pod-subpath-test-downwardapi-9qhx container test-container-subpath-downwardapi-9qhx: <nil>
    STEP: delete the pod
    Sep 21 20:58:49.379: INFO: Waiting for pod pod-subpath-test-downwardapi-9qhx to disappear
    Sep 21 20:58:49.384: INFO: Pod pod-subpath-test-downwardapi-9qhx no longer exists
    STEP: Deleting pod pod-subpath-test-downwardapi-9qhx
    Sep 21 20:58:49.384: INFO: Deleting pod "pod-subpath-test-downwardapi-9qhx" in namespace "subpath-7557"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:58:49.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-7557" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":166,"failed":0}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":19,"skipped":377,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:58:44.490: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
    STEP: Destroying namespace "webhook-5337-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":20,"skipped":377,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:58:51.359: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod UID as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 21 20:58:51.446: INFO: Waiting up to 5m0s for pod "downward-api-cfaa0d52-3e21-4851-bd29-493c60e07a40" in namespace "downward-api-5779" to be "Succeeded or Failed"
    Sep 21 20:58:51.453: INFO: Pod "downward-api-cfaa0d52-3e21-4851-bd29-493c60e07a40": Phase="Pending", Reason="", readiness=false. Elapsed: 6.730694ms
    Sep 21 20:58:53.459: INFO: Pod "downward-api-cfaa0d52-3e21-4851-bd29-493c60e07a40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013613994s
    STEP: Saw pod success
    Sep 21 20:58:53.460: INFO: Pod "downward-api-cfaa0d52-3e21-4851-bd29-493c60e07a40" satisfied condition "Succeeded or Failed"
    Sep 21 20:58:53.463: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod downward-api-cfaa0d52-3e21-4851-bd29-493c60e07a40 container dapi-container: <nil>
    STEP: delete the pod
    Sep 21 20:58:53.479: INFO: Waiting for pod downward-api-cfaa0d52-3e21-4851-bd29-493c60e07a40 to disappear
    Sep 21 20:58:53.483: INFO: Pod downward-api-cfaa0d52-3e21-4851-bd29-493c60e07a40 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:58:53.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-5779" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":399,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:58:53.588: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep 21 20:58:53.630: INFO: Waiting up to 5m0s for pod "pod-9f5f3414-0d75-49c4-8f86-4b32d736a374" in namespace "emptydir-5587" to be "Succeeded or Failed"
    Sep 21 20:58:53.634: INFO: Pod "pod-9f5f3414-0d75-49c4-8f86-4b32d736a374": Phase="Pending", Reason="", readiness=false. Elapsed: 3.630516ms
    Sep 21 20:58:55.639: INFO: Pod "pod-9f5f3414-0d75-49c4-8f86-4b32d736a374": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008407574s
    STEP: Saw pod success
    Sep 21 20:58:55.639: INFO: Pod "pod-9f5f3414-0d75-49c4-8f86-4b32d736a374" satisfied condition "Succeeded or Failed"
    Sep 21 20:58:55.643: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod pod-9f5f3414-0d75-49c4-8f86-4b32d736a374 container test-container: <nil>
    STEP: delete the pod
    Sep 21 20:58:55.660: INFO: Waiting for pod pod-9f5f3414-0d75-49c4-8f86-4b32d736a374 to disappear
    Sep 21 20:58:55.663: INFO: Pod pod-9f5f3414-0d75-49c4-8f86-4b32d736a374 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:58:55.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-5587" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":441,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:59:05.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-3057" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":13,"skipped":181,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods Extended
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:59:05.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-867" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":14,"skipped":202,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:59:08.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-3708" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":274,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:59:11.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-4258" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":23,"skipped":450,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:59:18.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-4820" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":451,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:59:20.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-2505" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":25,"skipped":467,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:59:20.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-2402" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":26,"skipped":469,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:59:20.415: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep 21 20:59:20.468: INFO: Waiting up to 5m0s for pod "pod-85809620-966e-489f-974b-27b020076a34" in namespace "emptydir-5524" to be "Succeeded or Failed"
    Sep 21 20:59:20.474: INFO: Pod "pod-85809620-966e-489f-974b-27b020076a34": Phase="Pending", Reason="", readiness=false. Elapsed: 5.283252ms
    Sep 21 20:59:22.479: INFO: Pod "pod-85809620-966e-489f-974b-27b020076a34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010050095s
    STEP: Saw pod success
    Sep 21 20:59:22.479: INFO: Pod "pod-85809620-966e-489f-974b-27b020076a34" satisfied condition "Succeeded or Failed"
    Sep 21 20:59:22.481: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod pod-85809620-966e-489f-974b-27b020076a34 container test-container: <nil>
    STEP: delete the pod
    Sep 21 20:59:22.502: INFO: Waiting for pod pod-85809620-966e-489f-974b-27b020076a34 to disappear
    Sep 21 20:59:22.505: INFO: Pod pod-85809620-966e-489f-974b-27b020076a34 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:59:22.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-5524" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":478,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with projected pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-projected-6mkg
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 21 20:59:08.142: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6mkg" in namespace "subpath-9745" to be "Succeeded or Failed"
    Sep 21 20:59:08.146: INFO: Pod "pod-subpath-test-projected-6mkg": Phase="Pending", Reason="", readiness=false. Elapsed: 3.976758ms
    Sep 21 20:59:10.151: INFO: Pod "pod-subpath-test-projected-6mkg": Phase="Running", Reason="", readiness=true. Elapsed: 2.009157175s
    Sep 21 20:59:12.156: INFO: Pod "pod-subpath-test-projected-6mkg": Phase="Running", Reason="", readiness=true. Elapsed: 4.013850442s
    Sep 21 20:59:14.162: INFO: Pod "pod-subpath-test-projected-6mkg": Phase="Running", Reason="", readiness=true. Elapsed: 6.02031876s
    Sep 21 20:59:16.176: INFO: Pod "pod-subpath-test-projected-6mkg": Phase="Running", Reason="", readiness=true. Elapsed: 8.034178218s
    Sep 21 20:59:18.184: INFO: Pod "pod-subpath-test-projected-6mkg": Phase="Running", Reason="", readiness=true. Elapsed: 10.042018266s
    Sep 21 20:59:20.190: INFO: Pod "pod-subpath-test-projected-6mkg": Phase="Running", Reason="", readiness=true. Elapsed: 12.048558191s
    Sep 21 20:59:22.194: INFO: Pod "pod-subpath-test-projected-6mkg": Phase="Running", Reason="", readiness=true. Elapsed: 14.052554213s
    Sep 21 20:59:24.200: INFO: Pod "pod-subpath-test-projected-6mkg": Phase="Running", Reason="", readiness=true. Elapsed: 16.057812241s
    Sep 21 20:59:26.207: INFO: Pod "pod-subpath-test-projected-6mkg": Phase="Running", Reason="", readiness=true. Elapsed: 18.065164466s
    Sep 21 20:59:28.213: INFO: Pod "pod-subpath-test-projected-6mkg": Phase="Running", Reason="", readiness=true. Elapsed: 20.071019532s
    Sep 21 20:59:30.217: INFO: Pod "pod-subpath-test-projected-6mkg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.07556077s
    STEP: Saw pod success
    Sep 21 20:59:30.217: INFO: Pod "pod-subpath-test-projected-6mkg" satisfied condition "Succeeded or Failed"
    Sep 21 20:59:30.220: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod pod-subpath-test-projected-6mkg container test-container-subpath-projected-6mkg: <nil>
    STEP: delete the pod
    Sep 21 20:59:30.235: INFO: Waiting for pod pod-subpath-test-projected-6mkg to disappear
    Sep 21 20:59:30.238: INFO: Pod pod-subpath-test-projected-6mkg no longer exists
    STEP: Deleting pod pod-subpath-test-projected-6mkg
    Sep 21 20:59:30.238: INFO: Deleting pod "pod-subpath-test-projected-6mkg" in namespace "subpath-9745"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:59:30.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-9745" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":276,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
    [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
    STEP: Watching for error events or started pod
    STEP: Waiting for pod completion
    STEP: Checking that the pod succeeded
    STEP: Getting logs from the pod
    STEP: Checking that the sysctl is actually updated
    [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:59:32.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-4585" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":17,"skipped":297,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
    [It] should serve a basic endpoint from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating service endpoint-test2 in namespace services-785
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-785 to expose endpoints map[]
    Sep 21 20:59:32.490: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found
    Sep 21 20:59:33.497: INFO: successfully validated that service endpoint-test2 in namespace services-785 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-785
    Sep 21 20:59:33.511: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep 21 20:59:35.515: INFO: The status of Pod pod1 is Running (Ready = true)
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-785 to expose endpoints map[pod1:[80]]
    Sep 21 20:59:35.534: INFO: successfully validated that service endpoint-test2 in namespace services-785 exposes endpoints map[pod1:[80]]
... skipping 14 lines ...
    STEP: Destroying namespace "services-785" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":18,"skipped":321,"failed":0}

    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:59:37.715: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename watch
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:59:43.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-943" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":19,"skipped":321,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 20:59:43.366: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep 21 20:59:43.429: INFO: Waiting up to 5m0s for pod "pod-f76b5406-8f05-43b8-aceb-1ef739519b3d" in namespace "emptydir-4190" to be "Succeeded or Failed"
    Sep 21 20:59:43.438: INFO: Pod "pod-f76b5406-8f05-43b8-aceb-1ef739519b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.963235ms
    Sep 21 20:59:45.443: INFO: Pod "pod-f76b5406-8f05-43b8-aceb-1ef739519b3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013032868s
    STEP: Saw pod success
    Sep 21 20:59:45.443: INFO: Pod "pod-f76b5406-8f05-43b8-aceb-1ef739519b3d" satisfied condition "Succeeded or Failed"
    Sep 21 20:59:45.447: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod pod-f76b5406-8f05-43b8-aceb-1ef739519b3d container test-container: <nil>
    STEP: delete the pod
    Sep 21 20:59:45.472: INFO: Waiting for pod pod-f76b5406-8f05-43b8-aceb-1ef739519b3d to disappear
    Sep 21 20:59:45.476: INFO: Pod pod-f76b5406-8f05-43b8-aceb-1ef739519b3d no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:59:45.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4190" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":328,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 20:59:46.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-6567" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":21,"skipped":347,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    • [SLOW TEST:314.158 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":26,"skipped":625,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] KubeletManagedEtcHosts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:01:06.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "e2e-kubelet-etc-hosts-2046" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":687,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:01:17.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-1012" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":28,"skipped":698,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:01:28.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-4743" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":29,"skipped":702,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:01:28.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-5734" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":30,"skipped":712,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:01:28.982: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override command
    Sep 21 21:01:29.031: INFO: Waiting up to 5m0s for pod "client-containers-cfa4d234-64a1-4c33-b021-43bfc8e83358" in namespace "containers-8786" to be "Succeeded or Failed"
    Sep 21 21:01:29.037: INFO: Pod "client-containers-cfa4d234-64a1-4c33-b021-43bfc8e83358": Phase="Pending", Reason="", readiness=false. Elapsed: 5.931028ms
    Sep 21 21:01:31.048: INFO: Pod "client-containers-cfa4d234-64a1-4c33-b021-43bfc8e83358": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01739283s
    STEP: Saw pod success
    Sep 21 21:01:31.048: INFO: Pod "client-containers-cfa4d234-64a1-4c33-b021-43bfc8e83358" satisfied condition "Succeeded or Failed"
    Sep 21 21:01:31.054: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-13tw3l pod client-containers-cfa4d234-64a1-4c33-b021-43bfc8e83358 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 21 21:01:31.082: INFO: Waiting for pod client-containers-cfa4d234-64a1-4c33-b021-43bfc8e83358 to disappear
    Sep 21 21:01:31.087: INFO: Pod client-containers-cfa4d234-64a1-4c33-b021-43bfc8e83358 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:01:31.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-8786" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":718,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 83 lines ...
    Sep 21 20:56:33.642: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
    Sep 21 20:56:33.642: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
    Sep 21 20:56:33.642: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
    
    Sep 21 20:56:33.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-3174 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 20:56:34.060: INFO: rc: 1
    Sep 21 20:56:34.060: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-3174 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    error: Internal error occurred: error executing command in container: failed to exec in container: container is in CONTAINER_EXITED state
    
    error:
    exit status 1
    Sep 21 20:56:44.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-3174 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 20:56:44.315: INFO: rc: 1
    Sep 21 20:56:44.315: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-3174 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-2" not found
    
    error:
    exit status 1
... skipping 351 lines: the identical RunHostCmd retry (Error from server (NotFound): pods "ss-2" not found, exit status 1) repeats every 10s from 20:56:54 through 21:01:18 ...
    Sep 21 21:01:28.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-3174 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:01:28.688: INFO: rc: 1
    Sep 21 21:01:28.689: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-3174 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-2" not found
    
    error:
    exit status 1
    Sep 21 21:01:38.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-3174 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:01:38.790: INFO: rc: 1
    Sep 21 21:01:38.790: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
    Sep 21 21:01:38.790: INFO: Scaling statefulset ss to 0
    STEP: Verifying that stateful set ss was scaled down in reverse order
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
        Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":3,"skipped":56,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] RuntimeClass
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:01:38.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "runtimeclass-1342" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":-1,"completed":4,"skipped":79,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:01:38.997: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-d066b57b-facb-4855-b56b-f95e15ac1a41
    STEP: Creating a pod to test consume configMaps
    Sep 21 21:01:39.036: INFO: Waiting up to 5m0s for pod "pod-configmaps-92885a25-ee1a-49d9-b33a-c62d16119d92" in namespace "configmap-8378" to be "Succeeded or Failed"
    Sep 21 21:01:39.040: INFO: Pod "pod-configmaps-92885a25-ee1a-49d9-b33a-c62d16119d92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.941634ms
    Sep 21 21:01:41.044: INFO: Pod "pod-configmaps-92885a25-ee1a-49d9-b33a-c62d16119d92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007494743s
    STEP: Saw pod success
    Sep 21 21:01:41.044: INFO: Pod "pod-configmaps-92885a25-ee1a-49d9-b33a-c62d16119d92" satisfied condition "Succeeded or Failed"
    Sep 21 21:01:41.047: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-13tw3l pod pod-configmaps-92885a25-ee1a-49d9-b33a-c62d16119d92 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 21 21:01:41.069: INFO: Waiting for pod pod-configmaps-92885a25-ee1a-49d9-b33a-c62d16119d92 to disappear
    Sep 21 21:01:41.072: INFO: Pod pod-configmaps-92885a25-ee1a-49d9-b33a-c62d16119d92 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:01:41.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-8378" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":80,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:01:47.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-5843" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":723,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 64 lines ...
    STEP: Destroying namespace "services-6429" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":111,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:02:07.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-6776" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":7,"skipped":112,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:02:22.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-2988" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":122,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:02:22.149: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-26f366ce-6b62-4a4c-9805-082fd9eb9740
    STEP: Creating a pod to test consume secrets
    Sep 21 21:02:22.237: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-355d713f-99c2-4b00-97c8-1a0c35f35fc9" in namespace "projected-692" to be "Succeeded or Failed"
    Sep 21 21:02:22.243: INFO: Pod "pod-projected-secrets-355d713f-99c2-4b00-97c8-1a0c35f35fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.67784ms
    Sep 21 21:02:24.249: INFO: Pod "pod-projected-secrets-355d713f-99c2-4b00-97c8-1a0c35f35fc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012086614s
    STEP: Saw pod success
    Sep 21 21:02:24.249: INFO: Pod "pod-projected-secrets-355d713f-99c2-4b00-97c8-1a0c35f35fc9" satisfied condition "Succeeded or Failed"
    Sep 21 21:02:24.252: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-13tw3l pod pod-projected-secrets-355d713f-99c2-4b00-97c8-1a0c35f35fc9 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 21 21:02:24.269: INFO: Waiting for pod pod-projected-secrets-355d713f-99c2-4b00-97c8-1a0c35f35fc9 to disappear
    Sep 21 21:02:24.272: INFO: Pod pod-projected-secrets-355d713f-99c2-4b00-97c8-1a0c35f35fc9 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:02:24.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-692" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":128,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 113 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:02:29.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-96" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":10,"skipped":180,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:02:29.681: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on node default medium
    Sep 21 21:02:29.771: INFO: Waiting up to 5m0s for pod "pod-96aba873-4e2e-4fe3-9b11-adec8c07c036" in namespace "emptydir-180" to be "Succeeded or Failed"
    Sep 21 21:02:29.779: INFO: Pod "pod-96aba873-4e2e-4fe3-9b11-adec8c07c036": Phase="Pending", Reason="", readiness=false. Elapsed: 6.711846ms
    Sep 21 21:02:31.786: INFO: Pod "pod-96aba873-4e2e-4fe3-9b11-adec8c07c036": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014194557s
    STEP: Saw pod success
    Sep 21 21:02:31.786: INFO: Pod "pod-96aba873-4e2e-4fe3-9b11-adec8c07c036" satisfied condition "Succeeded or Failed"
    Sep 21 21:02:31.793: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod pod-96aba873-4e2e-4fe3-9b11-adec8c07c036 container test-container: <nil>
    STEP: delete the pod
    Sep 21 21:02:31.836: INFO: Waiting for pod pod-96aba873-4e2e-4fe3-9b11-adec8c07c036 to disappear
    Sep 21 21:02:31.841: INFO: Pod pod-96aba873-4e2e-4fe3-9b11-adec8c07c036 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:02:31.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-180" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":205,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:02:32.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-771" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":12,"skipped":213,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:02:32.987: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep 21 21:02:33.041: INFO: Waiting up to 5m0s for pod "pod-6624abe0-2468-4c18-9eec-59cae906017d" in namespace "emptydir-2857" to be "Succeeded or Failed"
    Sep 21 21:02:33.048: INFO: Pod "pod-6624abe0-2468-4c18-9eec-59cae906017d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.517816ms
    Sep 21 21:02:35.056: INFO: Pod "pod-6624abe0-2468-4c18-9eec-59cae906017d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015514861s
    STEP: Saw pod success
    Sep 21 21:02:35.056: INFO: Pod "pod-6624abe0-2468-4c18-9eec-59cae906017d" satisfied condition "Succeeded or Failed"
    Sep 21 21:02:35.063: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-13tw3l pod pod-6624abe0-2468-4c18-9eec-59cae906017d container test-container: <nil>
    STEP: delete the pod
    Sep 21 21:02:35.104: INFO: Waiting for pod pod-6624abe0-2468-4c18-9eec-59cae906017d to disappear
    Sep 21 21:02:35.111: INFO: Pod pod-6624abe0-2468-4c18-9eec-59cae906017d no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:02:35.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-2857" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":223,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:02:53.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-4225" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":232,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "webhook-1819-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":15,"skipped":262,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:02:57.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-1231" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":16,"skipped":269,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:02:57.396: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-9f8dbbbe-4448-45c9-94fa-9e1b15b72ae5
    STEP: Creating a pod to test consume configMaps
    Sep 21 21:02:57.464: INFO: Waiting up to 5m0s for pod "pod-configmaps-883dad42-80ff-47e4-af47-29589e71b8fb" in namespace "configmap-7266" to be "Succeeded or Failed"
    Sep 21 21:02:57.476: INFO: Pod "pod-configmaps-883dad42-80ff-47e4-af47-29589e71b8fb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.661715ms
    Sep 21 21:02:59.482: INFO: Pod "pod-configmaps-883dad42-80ff-47e4-af47-29589e71b8fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017217092s
    STEP: Saw pod success
    Sep 21 21:02:59.482: INFO: Pod "pod-configmaps-883dad42-80ff-47e4-af47-29589e71b8fb" satisfied condition "Succeeded or Failed"
    Sep 21 21:02:59.485: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod pod-configmaps-883dad42-80ff-47e4-af47-29589e71b8fb container agnhost-container: <nil>
    STEP: delete the pod
    Sep 21 21:02:59.503: INFO: Waiting for pod pod-configmaps-883dad42-80ff-47e4-af47-29589e71b8fb to disappear
    Sep 21 21:02:59.506: INFO: Pod pod-configmaps-883dad42-80ff-47e4-af47-29589e71b8fb no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:02:59.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-7266" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":276,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.933 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":500,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:03:25.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4294" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":29,"skipped":506,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:03:25.711: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 21 21:03:25.814: INFO: Waiting up to 5m0s for pod "busybox-user-65534-5c1ab6b9-1f82-4ad7-b7ab-77269751ff9c" in namespace "security-context-test-1347" to be "Succeeded or Failed"
    Sep 21 21:03:25.826: INFO: Pod "busybox-user-65534-5c1ab6b9-1f82-4ad7-b7ab-77269751ff9c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.941071ms
    Sep 21 21:03:27.833: INFO: Pod "busybox-user-65534-5c1ab6b9-1f82-4ad7-b7ab-77269751ff9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018922293s
    Sep 21 21:03:27.833: INFO: Pod "busybox-user-65534-5c1ab6b9-1f82-4ad7-b7ab-77269751ff9c" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:03:27.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-1347" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":506,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:03:27.912: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on tmpfs
    Sep 21 21:03:27.961: INFO: Waiting up to 5m0s for pod "pod-0208319f-a386-4cc3-bc30-ae5ea791826d" in namespace "emptydir-5106" to be "Succeeded or Failed"
    Sep 21 21:03:27.968: INFO: Pod "pod-0208319f-a386-4cc3-bc30-ae5ea791826d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.387438ms
    Sep 21 21:03:29.972: INFO: Pod "pod-0208319f-a386-4cc3-bc30-ae5ea791826d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009231596s
    STEP: Saw pod success
    Sep 21 21:03:29.972: INFO: Pod "pod-0208319f-a386-4cc3-bc30-ae5ea791826d" satisfied condition "Succeeded or Failed"
    Sep 21 21:03:29.975: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod pod-0208319f-a386-4cc3-bc30-ae5ea791826d container test-container: <nil>
    STEP: delete the pod
    Sep 21 21:03:30.000: INFO: Waiting for pod pod-0208319f-a386-4cc3-bc30-ae5ea791826d to disappear
    Sep 21 21:03:30.004: INFO: Pod pod-0208319f-a386-4cc3-bc30-ae5ea791826d no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:03:30.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-5106" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":535,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:03:30.030: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 21 21:03:32.080: INFO: Deleting pod "var-expansion-4e9ba5f2-bd93-4694-b1db-599ea6ad8090" in namespace "var-expansion-8457"
    Sep 21 21:03:32.087: INFO: Wait up to 5m0s for pod "var-expansion-4e9ba5f2-bd93-4694-b1db-599ea6ad8090" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:03:40.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-8457" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":32,"skipped":540,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:03:44.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-149" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":542,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:03:57.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-4501" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":34,"skipped":548,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:03:57.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9471" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":35,"skipped":556,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] version v1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 344 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:04:19.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "proxy-1351" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":36,"skipped":608,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 21:04:20.055: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea2f5051-aad1-4452-b73c-66614d08ba5f" in namespace "projected-7545" to be "Succeeded or Failed"
    Sep 21 21:04:20.061: INFO: Pod "downwardapi-volume-ea2f5051-aad1-4452-b73c-66614d08ba5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156539ms
    Sep 21 21:04:22.066: INFO: Pod "downwardapi-volume-ea2f5051-aad1-4452-b73c-66614d08ba5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010617012s
    STEP: Saw pod success
    Sep 21 21:04:22.066: INFO: Pod "downwardapi-volume-ea2f5051-aad1-4452-b73c-66614d08ba5f" satisfied condition "Succeeded or Failed"
    Sep 21 21:04:22.069: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod downwardapi-volume-ea2f5051-aad1-4452-b73c-66614d08ba5f container client-container: <nil>
    STEP: delete the pod
    Sep 21 21:04:22.093: INFO: Waiting for pod downwardapi-volume-ea2f5051-aad1-4452-b73c-66614d08ba5f to disappear
    Sep 21 21:04:22.097: INFO: Pod downwardapi-volume-ea2f5051-aad1-4452-b73c-66614d08ba5f no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:04:22.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7545" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":615,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-projected-all-test-volume-528615b4-5d83-498f-a30a-0768f456a3f2
    STEP: Creating secret with name secret-projected-all-test-volume-b82c21c5-bd31-4f44-afda-4478724ac578
    STEP: Creating a pod to test Check all projections for projected volume plugin
    Sep 21 21:04:22.193: INFO: Waiting up to 5m0s for pod "projected-volume-8b8a9a7d-5875-4317-83a4-49a23b35d5aa" in namespace "projected-6874" to be "Succeeded or Failed"
    Sep 21 21:04:22.198: INFO: Pod "projected-volume-8b8a9a7d-5875-4317-83a4-49a23b35d5aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.602301ms
    Sep 21 21:04:24.204: INFO: Pod "projected-volume-8b8a9a7d-5875-4317-83a4-49a23b35d5aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010975473s
    STEP: Saw pod success
    Sep 21 21:04:24.204: INFO: Pod "projected-volume-8b8a9a7d-5875-4317-83a4-49a23b35d5aa" satisfied condition "Succeeded or Failed"
    Sep 21 21:04:24.209: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod projected-volume-8b8a9a7d-5875-4317-83a4-49a23b35d5aa container projected-all-volume-test: <nil>
    STEP: delete the pod
    Sep 21 21:04:24.232: INFO: Waiting for pod projected-volume-8b8a9a7d-5875-4317-83a4-49a23b35d5aa to disappear
    Sep 21 21:04:24.236: INFO: Pod projected-volume-8b8a9a7d-5875-4317-83a4-49a23b35d5aa no longer exists
    [AfterEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:04:24.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6874" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":623,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] server version
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:04:24.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "server-version-9235" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":39,"skipped":626,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:04:25.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "certificates-9881" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":40,"skipped":632,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 122 lines ...
    Sep 21 21:00:47.875: INFO: ss-0  k8s-upgrade-and-conformance-kcibnj-worker-f3twbs  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:59:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 21:00:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-21 21:00:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-21 20:59:47 +0000 UTC  }]
    Sep 21 21:00:47.875: INFO: 
    Sep 21 21:00:47.875: INFO: StatefulSet ss has not reached scale 0, at 1
    STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-7169
    Sep 21 21:00:48.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:00:49.030: INFO: rc: 1
    Sep 21 21:00:49.030: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    error: unable to upgrade connection: container not found ("webserver")

    
    error:

    exit status 1
    Sep 21 21:00:59.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:00:59.142: INFO: rc: 1
    Sep 21 21:00:59.142: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:01:09.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:01:09.248: INFO: rc: 1
    Sep 21 21:01:09.248: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:01:19.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:01:19.349: INFO: rc: 1
    Sep 21 21:01:19.350: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:01:29.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:01:29.500: INFO: rc: 1
    Sep 21 21:01:29.500: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:01:39.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:01:39.633: INFO: rc: 1
    Sep 21 21:01:39.633: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:01:49.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:01:49.741: INFO: rc: 1
    Sep 21 21:01:49.741: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:01:59.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:01:59.847: INFO: rc: 1
    Sep 21 21:01:59.847: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:02:09.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:02:09.951: INFO: rc: 1
    Sep 21 21:02:09.951: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:02:19.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:02:20.052: INFO: rc: 1
    Sep 21 21:02:20.052: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:02:30.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:02:30.152: INFO: rc: 1
    Sep 21 21:02:30.152: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:02:40.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:02:40.281: INFO: rc: 1
    Sep 21 21:02:40.281: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:02:50.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:02:50.697: INFO: rc: 1
    Sep 21 21:02:50.697: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:03:00.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:03:00.844: INFO: rc: 1
    Sep 21 21:03:00.844: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:03:10.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:03:10.951: INFO: rc: 1
    Sep 21 21:03:10.951: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:03:20.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:03:21.069: INFO: rc: 1
    Sep 21 21:03:21.069: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:03:31.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:03:31.179: INFO: rc: 1
    Sep 21 21:03:31.179: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:03:41.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:03:41.282: INFO: rc: 1
    Sep 21 21:03:41.282: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:03:51.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:03:51.416: INFO: rc: 1
    Sep 21 21:03:51.417: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:04:01.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:04:01.521: INFO: rc: 1
    Sep 21 21:04:01.521: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:04:11.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:04:11.642: INFO: rc: 1
    Sep 21 21:04:11.643: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:04:21.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:04:21.741: INFO: rc: 1
    Sep 21 21:04:21.741: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:04:31.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:04:31.844: INFO: rc: 1
    Sep 21 21:04:31.845: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:04:41.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:04:41.942: INFO: rc: 1
    Sep 21 21:04:41.942: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:04:51.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:04:52.059: INFO: rc: 1
    Sep 21 21:04:52.059: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:05:02.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:05:02.187: INFO: rc: 1
    Sep 21 21:05:02.187: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:05:12.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:05:12.299: INFO: rc: 1
    Sep 21 21:05:12.299: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:05:22.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:05:22.416: INFO: rc: 1
    Sep 21 21:05:22.416: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:05:32.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:05:32.541: INFO: rc: 1
    Sep 21 21:05:32.541: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:05:42.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:05:42.643: INFO: rc: 1
    Sep 21 21:05:42.643: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    Sep 21 21:05:52.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7169 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 21 21:05:52.762: INFO: rc: 1
    Sep 21 21:05:52.762: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
    Sep 21 21:05:52.762: INFO: Scaling statefulset ss to 0
    Sep 21 21:05:52.788: INFO: Waiting for statefulset status.replicas updated to 0
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
        Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":22,"skipped":357,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:05:55.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-7708" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":414,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 21:05:55.125: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a329d31-82fd-4037-bbb8-fb8426454d8d" in namespace "downward-api-8222" to be "Succeeded or Failed"
    Sep 21 21:05:55.129: INFO: Pod "downwardapi-volume-0a329d31-82fd-4037-bbb8-fb8426454d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009504ms
    Sep 21 21:05:57.134: INFO: Pod "downwardapi-volume-0a329d31-82fd-4037-bbb8-fb8426454d8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009062128s
    STEP: Saw pod success
    Sep 21 21:05:57.134: INFO: Pod "downwardapi-volume-0a329d31-82fd-4037-bbb8-fb8426454d8d" satisfied condition "Succeeded or Failed"
    Sep 21 21:05:57.137: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod downwardapi-volume-0a329d31-82fd-4037-bbb8-fb8426454d8d container client-container: <nil>
    STEP: delete the pod
    Sep 21 21:05:57.161: INFO: Waiting for pod downwardapi-volume-0a329d31-82fd-4037-bbb8-fb8426454d8d to disappear
    Sep 21 21:05:57.166: INFO: Pod downwardapi-volume-0a329d31-82fd-4037-bbb8-fb8426454d8d no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:05:57.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-8222" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":415,"failed":0}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:05:57.192: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep 21 21:05:57.245: INFO: Waiting up to 5m0s for pod "pod-c9c01ded-4058-48dc-8edf-b178aa1bf533" in namespace "emptydir-9401" to be "Succeeded or Failed"
    Sep 21 21:05:57.251: INFO: Pod "pod-c9c01ded-4058-48dc-8edf-b178aa1bf533": Phase="Pending", Reason="", readiness=false. Elapsed: 5.336949ms
    Sep 21 21:05:59.257: INFO: Pod "pod-c9c01ded-4058-48dc-8edf-b178aa1bf533": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010695483s
    STEP: Saw pod success
    Sep 21 21:05:59.257: INFO: Pod "pod-c9c01ded-4058-48dc-8edf-b178aa1bf533" satisfied condition "Succeeded or Failed"
    Sep 21 21:05:59.260: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod pod-c9c01ded-4058-48dc-8edf-b178aa1bf533 container test-container: <nil>
    STEP: delete the pod
    Sep 21 21:05:59.277: INFO: Waiting for pod pod-c9c01ded-4058-48dc-8edf-b178aa1bf533 to disappear
    Sep 21 21:05:59.281: INFO: Pod pod-c9c01ded-4058-48dc-8edf-b178aa1bf533 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:05:59.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-9401" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":420,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:06:05.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-4064" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":26,"skipped":486,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:06:36.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-3135" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":27,"skipped":507,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:06:43.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-7438" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":28,"skipped":546,"failed":0}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:06:43.789: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-b5bd593f-7e48-42a5-bf5b-6a40c7cdeff8
    STEP: Creating a pod to test consume secrets
    Sep 21 21:06:43.839: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c6a8c436-eb4c-4367-b378-02faf53e5146" in namespace "projected-4262" to be "Succeeded or Failed"
    Sep 21 21:06:43.843: INFO: Pod "pod-projected-secrets-c6a8c436-eb4c-4367-b378-02faf53e5146": Phase="Pending", Reason="", readiness=false. Elapsed: 4.256928ms
    Sep 21 21:06:45.850: INFO: Pod "pod-projected-secrets-c6a8c436-eb4c-4367-b378-02faf53e5146": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011306425s
    STEP: Saw pod success
    Sep 21 21:06:45.850: INFO: Pod "pod-projected-secrets-c6a8c436-eb4c-4367-b378-02faf53e5146" satisfied condition "Succeeded or Failed"
    Sep 21 21:06:45.854: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod pod-projected-secrets-c6a8c436-eb4c-4367-b378-02faf53e5146 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 21 21:06:45.881: INFO: Waiting for pod pod-projected-secrets-c6a8c436-eb4c-4367-b378-02faf53e5146 to disappear
    Sep 21 21:06:45.884: INFO: Pod pod-projected-secrets-c6a8c436-eb4c-4367-b378-02faf53e5146 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:06:45.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4262" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":549,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 21:06:45.946: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48f10dcd-7396-452a-a274-b6530a3eb072" in namespace "projected-2611" to be "Succeeded or Failed"
    Sep 21 21:06:45.951: INFO: Pod "downwardapi-volume-48f10dcd-7396-452a-a274-b6530a3eb072": Phase="Pending", Reason="", readiness=false. Elapsed: 4.278492ms
    Sep 21 21:06:47.959: INFO: Pod "downwardapi-volume-48f10dcd-7396-452a-a274-b6530a3eb072": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012446959s
    STEP: Saw pod success
    Sep 21 21:06:47.959: INFO: Pod "downwardapi-volume-48f10dcd-7396-452a-a274-b6530a3eb072" satisfied condition "Succeeded or Failed"
    Sep 21 21:06:47.965: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod downwardapi-volume-48f10dcd-7396-452a-a274-b6530a3eb072 container client-container: <nil>
    STEP: delete the pod
    Sep 21 21:06:47.986: INFO: Waiting for pod downwardapi-volume-48f10dcd-7396-452a-a274-b6530a3eb072 to disappear
    Sep 21 21:06:47.990: INFO: Pod downwardapi-volume-48f10dcd-7396-452a-a274-b6530a3eb072 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:06:47.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2611" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":551,"failed":0}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:06:48.013: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-32b0e2f5-5266-4a37-8ad2-3fa86ee40b50
    STEP: Creating a pod to test consume secrets
    Sep 21 21:06:48.067: INFO: Waiting up to 5m0s for pod "pod-secrets-a1481f3f-9ddb-49d6-a59d-0ddab440aa37" in namespace "secrets-1610" to be "Succeeded or Failed"
    Sep 21 21:06:48.071: INFO: Pod "pod-secrets-a1481f3f-9ddb-49d6-a59d-0ddab440aa37": Phase="Pending", Reason="", readiness=false. Elapsed: 3.592706ms
    Sep 21 21:06:50.075: INFO: Pod "pod-secrets-a1481f3f-9ddb-49d6-a59d-0ddab440aa37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007544565s
    STEP: Saw pod success
    Sep 21 21:06:50.075: INFO: Pod "pod-secrets-a1481f3f-9ddb-49d6-a59d-0ddab440aa37" satisfied condition "Succeeded or Failed"
    Sep 21 21:06:50.078: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod pod-secrets-a1481f3f-9ddb-49d6-a59d-0ddab440aa37 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 21 21:06:50.095: INFO: Waiting for pod pod-secrets-a1481f3f-9ddb-49d6-a59d-0ddab440aa37 to disappear
    Sep 21 21:06:50.099: INFO: Pod pod-secrets-a1481f3f-9ddb-49d6-a59d-0ddab440aa37 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:06:50.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-1610" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":556,"failed":0}
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:06:50.112: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-2459-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":32,"skipped":556,"failed":0}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 21:06:53.837: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d1603e5-505d-40eb-b34f-c7d7b2532e3b" in namespace "projected-9315" to be "Succeeded or Failed"
    Sep 21 21:06:53.841: INFO: Pod "downwardapi-volume-4d1603e5-505d-40eb-b34f-c7d7b2532e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31544ms
    Sep 21 21:06:55.849: INFO: Pod "downwardapi-volume-4d1603e5-505d-40eb-b34f-c7d7b2532e3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011796852s
    STEP: Saw pod success
    Sep 21 21:06:55.849: INFO: Pod "downwardapi-volume-4d1603e5-505d-40eb-b34f-c7d7b2532e3b" satisfied condition "Succeeded or Failed"
    Sep 21 21:06:55.852: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod downwardapi-volume-4d1603e5-505d-40eb-b34f-c7d7b2532e3b container client-container: <nil>
    STEP: delete the pod
    Sep 21 21:06:55.868: INFO: Waiting for pod downwardapi-volume-4d1603e5-505d-40eb-b34f-c7d7b2532e3b to disappear
    Sep 21 21:06:55.871: INFO: Pod downwardapi-volume-4d1603e5-505d-40eb-b34f-c7d7b2532e3b no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:06:55.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9315" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":571,"failed":0}
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:06:55.881: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep 21 21:06:55.922: INFO: Waiting up to 5m0s for pod "pod-2981e84b-bdab-4200-a228-52c47b9e19ac" in namespace "emptydir-8443" to be "Succeeded or Failed"
    Sep 21 21:06:55.927: INFO: Pod "pod-2981e84b-bdab-4200-a228-52c47b9e19ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.391341ms
    Sep 21 21:06:57.932: INFO: Pod "pod-2981e84b-bdab-4200-a228-52c47b9e19ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008601806s
    STEP: Saw pod success
    Sep 21 21:06:57.932: INFO: Pod "pod-2981e84b-bdab-4200-a228-52c47b9e19ac" satisfied condition "Succeeded or Failed"
    Sep 21 21:06:57.935: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod pod-2981e84b-bdab-4200-a228-52c47b9e19ac container test-container: <nil>
    STEP: delete the pod
    Sep 21 21:06:57.950: INFO: Waiting for pod pod-2981e84b-bdab-4200-a228-52c47b9e19ac to disappear
    Sep 21 21:06:57.953: INFO: Pod pod-2981e84b-bdab-4200-a228-52c47b9e19ac no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:06:57.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8443" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":571,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 21:06:58.037: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b253d5b-1d31-424a-b48b-63d71947a99e" in namespace "projected-4781" to be "Succeeded or Failed"
    Sep 21 21:06:58.040: INFO: Pod "downwardapi-volume-1b253d5b-1d31-424a-b48b-63d71947a99e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.069341ms
    Sep 21 21:07:00.045: INFO: Pod "downwardapi-volume-1b253d5b-1d31-424a-b48b-63d71947a99e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008565517s
    STEP: Saw pod success
    Sep 21 21:07:00.045: INFO: Pod "downwardapi-volume-1b253d5b-1d31-424a-b48b-63d71947a99e" satisfied condition "Succeeded or Failed"
    Sep 21 21:07:00.051: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod downwardapi-volume-1b253d5b-1d31-424a-b48b-63d71947a99e container client-container: <nil>
    STEP: delete the pod
    Sep 21 21:07:00.069: INFO: Waiting for pod downwardapi-volume-1b253d5b-1d31-424a-b48b-63d71947a99e to disappear
    Sep 21 21:07:00.072: INFO: Pod downwardapi-volume-1b253d5b-1d31-424a-b48b-63d71947a99e no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:07:00.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4781" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":594,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 70 lines ...
    STEP: Destroying namespace "services-1917" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":36,"skipped":627,"failed":0}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "crd-webhook-8640" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":37,"skipped":630,"failed":0}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:07:31.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-9319" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":633,"failed":0}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:07:31.112: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep 21 21:07:31.144: INFO: PodSpec: initContainers in spec.initContainers
    Sep 21 21:08:18.343: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ca84895c-d01c-4352-8486-b51cea8c33bc", GenerateName:"", Namespace:"init-container-9329", SelfLink:"", UID:"8ebe9831-b665-40e8-9bba-7b0b9a427980", ResourceVersion:"10288", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63799391251, loc:(*time.Location)(0x9e363e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"144014688"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00257ff08), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00257ff20)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00257ff38), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00257ff50)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-c5nc4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc008059d40), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-c5nc4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, 
Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-c5nc4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-c5nc4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003738f50), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003420c40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003739030)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003739050)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003739058), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00373905c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00443eec0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", 
Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63799391251, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63799391251, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63799391251, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63799391251, loc:(*time.Location)(0x9e363e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"192.168.0.35", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.0.35"}}, StartTime:(*v1.Time)(0xc00257ff80), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003420d20)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003420d90)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://e0f881360d8db5454bf9d599cc8880ca44d2395a1ee67abc5b9560a23ad422ee", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc008059dc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc008059da0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00373910f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:08:18.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-9329" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":39,"skipped":638,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:08:28.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-1784" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":40,"skipped":693,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:08:28.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-7570" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":41,"skipped":733,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:08:28.853: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-df33d1fb-284c-481b-88b6-d7979b29da37
    STEP: Creating a pod to test consume configMaps
    Sep 21 21:08:28.902: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-42997330-fd00-46dc-9e80-bcabae17e509" in namespace "projected-9422" to be "Succeeded or Failed"
    Sep 21 21:08:28.905: INFO: Pod "pod-projected-configmaps-42997330-fd00-46dc-9e80-bcabae17e509": Phase="Pending", Reason="", readiness=false. Elapsed: 2.701064ms
    Sep 21 21:08:30.909: INFO: Pod "pod-projected-configmaps-42997330-fd00-46dc-9e80-bcabae17e509": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006960941s
    STEP: Saw pod success
    Sep 21 21:08:30.909: INFO: Pod "pod-projected-configmaps-42997330-fd00-46dc-9e80-bcabae17e509" satisfied condition "Succeeded or Failed"
    Sep 21 21:08:30.913: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod pod-projected-configmaps-42997330-fd00-46dc-9e80-bcabae17e509 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 21 21:08:30.947: INFO: Waiting for pod pod-projected-configmaps-42997330-fd00-46dc-9e80-bcabae17e509 to disappear
    Sep 21 21:08:30.951: INFO: Pod pod-projected-configmaps-42997330-fd00-46dc-9e80-bcabae17e509 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:08:30.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9422" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":735,"failed":0}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:08:37.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-393" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":744,"failed":0}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:08:41.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-1761" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":759,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "webhook-3221-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":45,"skipped":760,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:08:45.458: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 21 21:08:45.534: INFO: Waiting up to 5m0s for pod "downward-api-a06b9479-affb-46a6-a471-462381f34c91" in namespace "downward-api-871" to be "Succeeded or Failed"
    Sep 21 21:08:45.539: INFO: Pod "downward-api-a06b9479-affb-46a6-a471-462381f34c91": Phase="Pending", Reason="", readiness=false. Elapsed: 3.948094ms
    Sep 21 21:08:47.544: INFO: Pod "downward-api-a06b9479-affb-46a6-a471-462381f34c91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008771243s
    STEP: Saw pod success
    Sep 21 21:08:47.544: INFO: Pod "downward-api-a06b9479-affb-46a6-a471-462381f34c91" satisfied condition "Succeeded or Failed"
    Sep 21 21:08:47.547: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod downward-api-a06b9479-affb-46a6-a471-462381f34c91 container dapi-container: <nil>
    STEP: delete the pod
    Sep 21 21:08:47.579: INFO: Waiting for pod downward-api-a06b9479-affb-46a6-a471-462381f34c91 to disappear
    Sep 21 21:08:47.582: INFO: Pod downward-api-a06b9479-affb-46a6-a471-462381f34c91 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:08:47.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-871" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":764,"failed":0}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:09:09.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-2553" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":769,"failed":0}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:09:13.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-516" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":48,"skipped":785,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 58 lines ...
    Sep 21 21:02:00.498: INFO: stderr: ""
    Sep 21 21:02:00.499: INFO: stdout: "true"
    Sep 21 21:02:00.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-418 get pods update-demo-nautilus-x9s55 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 21 21:02:00.604: INFO: stderr: ""
    Sep 21 21:02:00.604: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
    Sep 21 21:02:00.604: INFO: validating pod update-demo-nautilus-x9s55
    Sep 21 21:05:35.159: INFO: update-demo-nautilus-x9s55 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-x9s55)
    Sep 21 21:05:40.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-418 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
    Sep 21 21:05:40.256: INFO: stderr: ""
    Sep 21 21:05:40.256: INFO: stdout: "update-demo-nautilus-vddnd update-demo-nautilus-x9s55 "
    Sep 21 21:05:40.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-418 get pods update-demo-nautilus-vddnd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
    Sep 21 21:05:40.352: INFO: stderr: ""
    Sep 21 21:05:40.352: INFO: stdout: "true"
... skipping 11 lines ...
    Sep 21 21:05:40.556: INFO: stderr: ""
    Sep 21 21:05:40.556: INFO: stdout: "true"
    Sep 21 21:05:40.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-418 get pods update-demo-nautilus-x9s55 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 21 21:05:40.650: INFO: stderr: ""
    Sep 21 21:05:40.650: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
    Sep 21 21:05:40.650: INFO: validating pod update-demo-nautilus-x9s55
    Sep 21 21:09:14.299: INFO: update-demo-nautilus-x9s55 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-x9s55)
    Sep 21 21:09:19.302: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 +0x2ad
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0010f2c00)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 28 lines ...
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 21 21:09:19.302: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
    ------------------------------
    {"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":32,"skipped":763,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:09:19.695: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 119 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:09:35.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7673" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":33,"skipped":763,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:09:35.461: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 17 lines ...
    STEP: Destroying namespace "services-3053" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":34,"skipped":763,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:09:35.554: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 21 21:09:35.601: INFO: Waiting up to 5m0s for pod "downward-api-390c0da4-7753-469c-850e-ec7261f0c33b" in namespace "downward-api-14" to be "Succeeded or Failed"
    Sep 21 21:09:35.603: INFO: Pod "downward-api-390c0da4-7753-469c-850e-ec7261f0c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.672176ms
    Sep 21 21:09:37.608: INFO: Pod "downward-api-390c0da4-7753-469c-850e-ec7261f0c33b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007672045s
    STEP: Saw pod success
    Sep 21 21:09:37.609: INFO: Pod "downward-api-390c0da4-7753-469c-850e-ec7261f0c33b" satisfied condition "Succeeded or Failed"
    Sep 21 21:09:37.614: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-13tw3l pod downward-api-390c0da4-7753-469c-850e-ec7261f0c33b container dapi-container: <nil>
    STEP: delete the pod
    Sep 21 21:09:37.651: INFO: Waiting for pod downward-api-390c0da4-7753-469c-850e-ec7261f0c33b to disappear
    Sep 21 21:09:37.658: INFO: Pod downward-api-390c0da4-7753-469c-850e-ec7261f0c33b no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:09:37.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-14" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":767,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:09:37.686: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide host IP as an env var [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 21 21:09:37.734: INFO: Waiting up to 5m0s for pod "downward-api-f139ff7a-9ac1-4c0f-99c9-9c565373a72f" in namespace "downward-api-4403" to be "Succeeded or Failed"
    Sep 21 21:09:37.739: INFO: Pod "downward-api-f139ff7a-9ac1-4c0f-99c9-9c565373a72f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220407ms
    Sep 21 21:09:39.744: INFO: Pod "downward-api-f139ff7a-9ac1-4c0f-99c9-9c565373a72f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009811751s
    STEP: Saw pod success
    Sep 21 21:09:39.744: INFO: Pod "downward-api-f139ff7a-9ac1-4c0f-99c9-9c565373a72f" satisfied condition "Succeeded or Failed"
    Sep 21 21:09:39.748: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-13tw3l pod downward-api-f139ff7a-9ac1-4c0f-99c9-9c565373a72f container dapi-container: <nil>
    STEP: delete the pod
    Sep 21 21:09:39.769: INFO: Waiting for pod downward-api-f139ff7a-9ac1-4c0f-99c9-9c565373a72f to disappear
    Sep 21 21:09:39.772: INFO: Pod downward-api-f139ff7a-9ac1-4c0f-99c9-9c565373a72f no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:09:39.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-4403" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":777,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:09:40.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-6715" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":784,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:09:47.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-6424" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":38,"skipped":787,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep 21 21:10:05.311: INFO: File wheezy_udp@dns-test-service-3.dns-1632.svc.cluster.local from pod  dns-1632/dns-test-1c1b9536-2c23-4257-abe9-5dfa2c8c4948 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 21 21:10:05.318: INFO: File jessie_udp@dns-test-service-3.dns-1632.svc.cluster.local from pod  dns-1632/dns-test-1c1b9536-2c23-4257-abe9-5dfa2c8c4948 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 21 21:10:05.318: INFO: Lookups using dns-1632/dns-test-1c1b9536-2c23-4257-abe9-5dfa2c8c4948 failed for: [wheezy_udp@dns-test-service-3.dns-1632.svc.cluster.local jessie_udp@dns-test-service-3.dns-1632.svc.cluster.local]
    
    Sep 21 21:10:10.324: INFO: File wheezy_udp@dns-test-service-3.dns-1632.svc.cluster.local from pod  dns-1632/dns-test-1c1b9536-2c23-4257-abe9-5dfa2c8c4948 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 21 21:10:10.328: INFO: File jessie_udp@dns-test-service-3.dns-1632.svc.cluster.local from pod  dns-1632/dns-test-1c1b9536-2c23-4257-abe9-5dfa2c8c4948 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 21 21:10:10.328: INFO: Lookups using dns-1632/dns-test-1c1b9536-2c23-4257-abe9-5dfa2c8c4948 failed for: [wheezy_udp@dns-test-service-3.dns-1632.svc.cluster.local jessie_udp@dns-test-service-3.dns-1632.svc.cluster.local]
    
    Sep 21 21:10:15.323: INFO: File wheezy_udp@dns-test-service-3.dns-1632.svc.cluster.local from pod  dns-1632/dns-test-1c1b9536-2c23-4257-abe9-5dfa2c8c4948 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 21 21:10:15.327: INFO: File jessie_udp@dns-test-service-3.dns-1632.svc.cluster.local from pod  dns-1632/dns-test-1c1b9536-2c23-4257-abe9-5dfa2c8c4948 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 21 21:10:15.327: INFO: Lookups using dns-1632/dns-test-1c1b9536-2c23-4257-abe9-5dfa2c8c4948 failed for: [wheezy_udp@dns-test-service-3.dns-1632.svc.cluster.local jessie_udp@dns-test-service-3.dns-1632.svc.cluster.local]
    
    Sep 21 21:10:20.328: INFO: File jessie_udp@dns-test-service-3.dns-1632.svc.cluster.local from pod  dns-1632/dns-test-1c1b9536-2c23-4257-abe9-5dfa2c8c4948 contains '' instead of 'bar.example.com.'
    Sep 21 21:10:20.329: INFO: Lookups using dns-1632/dns-test-1c1b9536-2c23-4257-abe9-5dfa2c8c4948 failed for: [jessie_udp@dns-test-service-3.dns-1632.svc.cluster.local]
    
    Sep 21 21:10:25.327: INFO: DNS probes using dns-test-1c1b9536-2c23-4257-abe9-5dfa2c8c4948 succeeded
    
    STEP: deleting the pod
    STEP: changing the service to type=ClusterIP
    STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1632.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1632.svc.cluster.local; sleep 1; done
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:10:29.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-1632" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":39,"skipped":796,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:10:29.512: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-4af91a48-31f1-4de7-a9e4-e80e08391f26
    STEP: Creating a pod to test consume secrets
    Sep 21 21:10:29.638: INFO: Waiting up to 5m0s for pod "pod-secrets-c36a498e-e173-4206-a3fa-37090ea410e5" in namespace "secrets-4633" to be "Succeeded or Failed"
    Sep 21 21:10:29.641: INFO: Pod "pod-secrets-c36a498e-e173-4206-a3fa-37090ea410e5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.053725ms
    Sep 21 21:10:31.646: INFO: Pod "pod-secrets-c36a498e-e173-4206-a3fa-37090ea410e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008038572s
    STEP: Saw pod success
    Sep 21 21:10:31.646: INFO: Pod "pod-secrets-c36a498e-e173-4206-a3fa-37090ea410e5" satisfied condition "Succeeded or Failed"
    Sep 21 21:10:31.649: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod pod-secrets-c36a498e-e173-4206-a3fa-37090ea410e5 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 21 21:10:31.673: INFO: Waiting for pod pod-secrets-c36a498e-e173-4206-a3fa-37090ea410e5 to disappear
    Sep 21 21:10:31.676: INFO: Pod pod-secrets-c36a498e-e173-4206-a3fa-37090ea410e5 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:10:31.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-4633" for this suite.
    STEP: Destroying namespace "secret-namespace-3438" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":796,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:10:31.695: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's args [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's args
    Sep 21 21:10:31.742: INFO: Waiting up to 5m0s for pod "var-expansion-84422908-1009-4912-ae81-7e9afb3283be" in namespace "var-expansion-9078" to be "Succeeded or Failed"
    Sep 21 21:10:31.746: INFO: Pod "var-expansion-84422908-1009-4912-ae81-7e9afb3283be": Phase="Pending", Reason="", readiness=false. Elapsed: 3.361582ms
    Sep 21 21:10:33.750: INFO: Pod "var-expansion-84422908-1009-4912-ae81-7e9afb3283be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007348907s
    STEP: Saw pod success
    Sep 21 21:10:33.750: INFO: Pod "var-expansion-84422908-1009-4912-ae81-7e9afb3283be" satisfied condition "Succeeded or Failed"
    Sep 21 21:10:33.752: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod var-expansion-84422908-1009-4912-ae81-7e9afb3283be container dapi-container: <nil>
    STEP: delete the pod
    Sep 21 21:10:33.767: INFO: Waiting for pod var-expansion-84422908-1009-4912-ae81-7e9afb3283be to disappear
    Sep 21 21:10:33.771: INFO: Pod var-expansion-84422908-1009-4912-ae81-7e9afb3283be no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:10:33.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-9078" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":797,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:10:33.800: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-a6dd1344-bd8b-4be9-bceb-f37f93520076
    STEP: Creating a pod to test consume secrets
    Sep 21 21:10:33.847: INFO: Waiting up to 5m0s for pod "pod-secrets-110c092e-7b40-4742-895d-a192c870ba6c" in namespace "secrets-5351" to be "Succeeded or Failed"
    Sep 21 21:10:33.853: INFO: Pod "pod-secrets-110c092e-7b40-4742-895d-a192c870ba6c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.444026ms
    Sep 21 21:10:35.858: INFO: Pod "pod-secrets-110c092e-7b40-4742-895d-a192c870ba6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010874748s
    STEP: Saw pod success
    Sep 21 21:10:35.858: INFO: Pod "pod-secrets-110c092e-7b40-4742-895d-a192c870ba6c" satisfied condition "Succeeded or Failed"
    Sep 21 21:10:35.862: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod pod-secrets-110c092e-7b40-4742-895d-a192c870ba6c container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 21 21:10:35.878: INFO: Waiting for pod pod-secrets-110c092e-7b40-4742-895d-a192c870ba6c to disappear
    Sep 21 21:10:35.880: INFO: Pod pod-secrets-110c092e-7b40-4742-895d-a192c870ba6c no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:10:35.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-5351" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":808,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:10:45.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-9872" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":43,"skipped":813,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 41 lines ...
    STEP: Destroying namespace "services-1669" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":44,"skipped":818,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 45 lines ...
    STEP: Destroying namespace "services-6471" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":45,"skipped":833,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:11:22.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7255" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":46,"skipped":843,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:11:50.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-4364" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":47,"skipped":861,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:11:50.662: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep 21 21:11:50.755: INFO: Waiting up to 5m0s for pod "pod-cc5e0207-c4ca-45d3-ba3d-9cab015158d5" in namespace "emptydir-4216" to be "Succeeded or Failed"
    Sep 21 21:11:50.765: INFO: Pod "pod-cc5e0207-c4ca-45d3-ba3d-9cab015158d5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.038221ms
    Sep 21 21:11:52.775: INFO: Pod "pod-cc5e0207-c4ca-45d3-ba3d-9cab015158d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020046202s
    STEP: Saw pod success
    Sep 21 21:11:52.775: INFO: Pod "pod-cc5e0207-c4ca-45d3-ba3d-9cab015158d5" satisfied condition "Succeeded or Failed"
    Sep 21 21:11:52.788: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod pod-cc5e0207-c4ca-45d3-ba3d-9cab015158d5 container test-container: <nil>
    STEP: delete the pod
    Sep 21 21:11:52.814: INFO: Waiting for pod pod-cc5e0207-c4ca-45d3-ba3d-9cab015158d5 to disappear
    Sep 21 21:11:52.829: INFO: Pod pod-cc5e0207-c4ca-45d3-ba3d-9cab015158d5 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:11:52.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4216" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":863,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
    STEP: Destroying namespace "webhook-1273-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":49,"skipped":865,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:12:07.295: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 21 21:12:07.466: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-0fd296da-cd5b-4701-9a7e-31392cfe0997" in namespace "security-context-test-9201" to be "Succeeded or Failed"
    Sep 21 21:12:07.496: INFO: Pod "busybox-readonly-false-0fd296da-cd5b-4701-9a7e-31392cfe0997": Phase="Pending", Reason="", readiness=false. Elapsed: 30.005548ms
    Sep 21 21:12:09.505: INFO: Pod "busybox-readonly-false-0fd296da-cd5b-4701-9a7e-31392cfe0997": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.039258324s
    Sep 21 21:12:09.505: INFO: Pod "busybox-readonly-false-0fd296da-cd5b-4701-9a7e-31392cfe0997" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:12:09.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-9201" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":887,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:12:22.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-6508" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":51,"skipped":891,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:12:22.972: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create ConfigMap with empty key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap that has name configmap-test-emptyKey-a3eabdc4-f3a8-42a2-ac06-efee7132e767
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:12:23.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-722" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":52,"skipped":921,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:12:23.142: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep 21 21:12:23.233: INFO: Waiting up to 5m0s for pod "pod-5b39d8f0-fe4f-436a-8a41-e7efdcccbf54" in namespace "emptydir-7367" to be "Succeeded or Failed"
    Sep 21 21:12:23.239: INFO: Pod "pod-5b39d8f0-fe4f-436a-8a41-e7efdcccbf54": Phase="Pending", Reason="", readiness=false. Elapsed: 5.901222ms
    Sep 21 21:12:25.248: INFO: Pod "pod-5b39d8f0-fe4f-436a-8a41-e7efdcccbf54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015154238s
    Sep 21 21:12:27.258: INFO: Pod "pod-5b39d8f0-fe4f-436a-8a41-e7efdcccbf54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024676184s
    STEP: Saw pod success
    Sep 21 21:12:27.258: INFO: Pod "pod-5b39d8f0-fe4f-436a-8a41-e7efdcccbf54" satisfied condition "Succeeded or Failed"
    Sep 21 21:12:27.265: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod pod-5b39d8f0-fe4f-436a-8a41-e7efdcccbf54 container test-container: <nil>
    STEP: delete the pod
    Sep 21 21:12:27.312: INFO: Waiting for pod pod-5b39d8f0-fe4f-436a-8a41-e7efdcccbf54 to disappear
    Sep 21 21:12:27.319: INFO: Pod pod-5b39d8f0-fe4f-436a-8a41-e7efdcccbf54 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:12:27.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7367" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":942,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
    STEP: Destroying namespace "webhook-8843-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":54,"skipped":966,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:12:34.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-4761" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":55,"skipped":974,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:12:34.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-963" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":56,"skipped":982,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Looking for a node to schedule stateful set and pod
    STEP: Creating pod with conflicting port in namespace statefulset-1916
    STEP: Creating statefulset with conflicting port in namespace statefulset-1916
    STEP: Waiting until pod test-pod will start running in namespace statefulset-1916
    STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1916
    Sep 21 21:12:39.269: INFO: Observed stateful pod in namespace: statefulset-1916, name: ss-0, uid: a88ce9af-545b-4aac-9731-6d47fdbfcdb8, status phase: Pending. Waiting for statefulset controller to delete.
    Sep 21 21:12:39.449: INFO: Observed stateful pod in namespace: statefulset-1916, name: ss-0, uid: a88ce9af-545b-4aac-9731-6d47fdbfcdb8, status phase: Failed. Waiting for statefulset controller to delete.
    Sep 21 21:12:39.464: INFO: Observed stateful pod in namespace: statefulset-1916, name: ss-0, uid: a88ce9af-545b-4aac-9731-6d47fdbfcdb8, status phase: Failed. Waiting for statefulset controller to delete.
    Sep 21 21:12:39.468: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1916
    STEP: Removing pod with conflicting port in namespace statefulset-1916
    STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1916 and will be in running state
    [AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
    Sep 21 21:12:43.514: INFO: Deleting all statefulset in ns statefulset-1916
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:12:53.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-1916" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":57,"skipped":1003,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:13:16.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-6359" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":1025,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:13:16.833: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-29e29e77-3cba-4958-ab27-73ad7975cb22
    STEP: Creating a pod to test consume configMaps
    Sep 21 21:13:16.915: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bca1e1a2-2f1c-4eb8-99f7-67b6d776c8c4" in namespace "projected-3279" to be "Succeeded or Failed"
    Sep 21 21:13:16.923: INFO: Pod "pod-projected-configmaps-bca1e1a2-2f1c-4eb8-99f7-67b6d776c8c4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.755576ms
    Sep 21 21:13:18.930: INFO: Pod "pod-projected-configmaps-bca1e1a2-2f1c-4eb8-99f7-67b6d776c8c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015130997s
    STEP: Saw pod success
    Sep 21 21:13:18.930: INFO: Pod "pod-projected-configmaps-bca1e1a2-2f1c-4eb8-99f7-67b6d776c8c4" satisfied condition "Succeeded or Failed"
    Sep 21 21:13:18.937: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod pod-projected-configmaps-bca1e1a2-2f1c-4eb8-99f7-67b6d776c8c4 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 21 21:13:18.967: INFO: Waiting for pod pod-projected-configmaps-bca1e1a2-2f1c-4eb8-99f7-67b6d776c8c4 to disappear
    Sep 21 21:13:18.972: INFO: Pod pod-projected-configmaps-bca1e1a2-2f1c-4eb8-99f7-67b6d776c8c4 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:13:18.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3279" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":1031,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:13:43.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-4367" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":1036,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:13:46.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-3748" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":61,"skipped":1049,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Destroying namespace "services-972" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":62,"skipped":1058,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:13:51.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-8193" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":63,"skipped":1067,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:14:01.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-8380" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":64,"skipped":1082,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:14:01.925: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep 21 21:14:02.009: INFO: Waiting up to 5m0s for pod "security-context-8aa65428-bfd1-4dee-b516-69cb5032519b" in namespace "security-context-8389" to be "Succeeded or Failed"
    Sep 21 21:14:02.017: INFO: Pod "security-context-8aa65428-bfd1-4dee-b516-69cb5032519b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.976093ms
    Sep 21 21:14:04.029: INFO: Pod "security-context-8aa65428-bfd1-4dee-b516-69cb5032519b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019395536s
    STEP: Saw pod success
    Sep 21 21:14:04.029: INFO: Pod "security-context-8aa65428-bfd1-4dee-b516-69cb5032519b" satisfied condition "Succeeded or Failed"
    Sep 21 21:14:04.039: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod security-context-8aa65428-bfd1-4dee-b516-69cb5032519b container test-container: <nil>
    STEP: delete the pod
    Sep 21 21:14:04.077: INFO: Waiting for pod security-context-8aa65428-bfd1-4dee-b516-69cb5032519b to disappear
    Sep 21 21:14:04.084: INFO: Pod security-context-8aa65428-bfd1-4dee-b516-69cb5032519b no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:14:04.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-8389" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":65,"skipped":1103,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:14:04.124: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-8a4bcc16-e2da-4e4d-9aa1-6754c945116e
    STEP: Creating a pod to test consume secrets
    Sep 21 21:14:04.232: INFO: Waiting up to 5m0s for pod "pod-secrets-1043e31a-ef19-4e6c-9055-d9cf66d5d2c0" in namespace "secrets-3153" to be "Succeeded or Failed"
    Sep 21 21:14:04.237: INFO: Pod "pod-secrets-1043e31a-ef19-4e6c-9055-d9cf66d5d2c0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.160796ms
    Sep 21 21:14:06.245: INFO: Pod "pod-secrets-1043e31a-ef19-4e6c-9055-d9cf66d5d2c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013109727s
    STEP: Saw pod success
    Sep 21 21:14:06.247: INFO: Pod "pod-secrets-1043e31a-ef19-4e6c-9055-d9cf66d5d2c0" satisfied condition "Succeeded or Failed"
    Sep 21 21:14:06.256: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod pod-secrets-1043e31a-ef19-4e6c-9055-d9cf66d5d2c0 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 21 21:14:06.295: INFO: Waiting for pod pod-secrets-1043e31a-ef19-4e6c-9055-d9cf66d5d2c0 to disappear
    Sep 21 21:14:06.303: INFO: Pod pod-secrets-1043e31a-ef19-4e6c-9055-d9cf66d5d2c0 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:14:06.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3153" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":66,"skipped":1105,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 59 lines ...
    STEP: Destroying namespace "services-5971" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":67,"skipped":1108,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 76 lines ...
    • [SLOW TEST:704.511 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":41,"skipped":667,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 77 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:16:42.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-1075" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":68,"skipped":1112,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:16:42.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-545" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":69,"skipped":1116,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:16:42.568: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-4cef7bbd-96c5-4cca-8912-f01cc5ac879a
    STEP: Creating a pod to test consume secrets
    Sep 21 21:16:42.631: INFO: Waiting up to 5m0s for pod "pod-secrets-9a9845e9-576b-4972-89b6-3b062e9b13a6" in namespace "secrets-1886" to be "Succeeded or Failed"
    Sep 21 21:16:42.637: INFO: Pod "pod-secrets-9a9845e9-576b-4972-89b6-3b062e9b13a6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.655201ms
    Sep 21 21:16:44.644: INFO: Pod "pod-secrets-9a9845e9-576b-4972-89b6-3b062e9b13a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012578588s
    STEP: Saw pod success
    Sep 21 21:16:44.644: INFO: Pod "pod-secrets-9a9845e9-576b-4972-89b6-3b062e9b13a6" satisfied condition "Succeeded or Failed"
    Sep 21 21:16:44.649: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod pod-secrets-9a9845e9-576b-4972-89b6-3b062e9b13a6 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 21 21:16:44.687: INFO: Waiting for pod pod-secrets-9a9845e9-576b-4972-89b6-3b062e9b13a6 to disappear
    Sep 21 21:16:44.690: INFO: Pod pod-secrets-9a9845e9-576b-4972-89b6-3b062e9b13a6 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:16:44.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-1886" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":70,"skipped":1142,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 21:16:44.770: INFO: Waiting up to 5m0s for pod "downwardapi-volume-208e46c4-ab1b-43e1-af05-1e85fa54fc90" in namespace "projected-8390" to be "Succeeded or Failed"
    Sep 21 21:16:44.774: INFO: Pod "downwardapi-volume-208e46c4-ab1b-43e1-af05-1e85fa54fc90": Phase="Pending", Reason="", readiness=false. Elapsed: 3.636113ms
    Sep 21 21:16:46.781: INFO: Pod "downwardapi-volume-208e46c4-ab1b-43e1-af05-1e85fa54fc90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010664496s
    STEP: Saw pod success
    Sep 21 21:16:46.781: INFO: Pod "downwardapi-volume-208e46c4-ab1b-43e1-af05-1e85fa54fc90" satisfied condition "Succeeded or Failed"
    Sep 21 21:16:46.786: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod downwardapi-volume-208e46c4-ab1b-43e1-af05-1e85fa54fc90 container client-container: <nil>
    STEP: delete the pod
    Sep 21 21:16:46.808: INFO: Waiting for pod downwardapi-volume-208e46c4-ab1b-43e1-af05-1e85fa54fc90 to disappear
    Sep 21 21:16:46.812: INFO: Pod downwardapi-volume-208e46c4-ab1b-43e1-af05-1e85fa54fc90 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:16:46.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8390" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":71,"skipped":1146,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":42,"skipped":679,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:16:30.136: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-probe
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:16:52.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-7768" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":679,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    Sep 21 21:16:49.091: INFO: Unable to read jessie_udp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:49.098: INFO: Unable to read jessie_tcp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:49.104: INFO: Unable to read jessie_udp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:49.111: INFO: Unable to read jessie_tcp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:49.118: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:49.124: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:49.160: INFO: Lookups using dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2413 wheezy_tcp@dns-test-service.dns-2413 wheezy_udp@dns-test-service.dns-2413.svc wheezy_tcp@dns-test-service.dns-2413.svc wheezy_udp@_http._tcp.dns-test-service.dns-2413.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2413.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2413 jessie_tcp@dns-test-service.dns-2413 jessie_udp@dns-test-service.dns-2413.svc jessie_tcp@dns-test-service.dns-2413.svc jessie_udp@_http._tcp.dns-test-service.dns-2413.svc jessie_tcp@_http._tcp.dns-test-service.dns-2413.svc]

    
    Sep 21 21:16:54.166: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:54.170: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:54.176: INFO: Unable to read wheezy_udp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:54.180: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:54.184: INFO: Unable to read wheezy_udp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
... skipping 5 lines ...
    Sep 21 21:16:54.240: INFO: Unable to read jessie_udp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:54.245: INFO: Unable to read jessie_tcp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:54.249: INFO: Unable to read jessie_udp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:54.253: INFO: Unable to read jessie_tcp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:54.258: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:54.263: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:54.289: INFO: Lookups using dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2413 wheezy_tcp@dns-test-service.dns-2413 wheezy_udp@dns-test-service.dns-2413.svc wheezy_tcp@dns-test-service.dns-2413.svc wheezy_udp@_http._tcp.dns-test-service.dns-2413.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2413.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2413 jessie_tcp@dns-test-service.dns-2413 jessie_udp@dns-test-service.dns-2413.svc jessie_tcp@dns-test-service.dns-2413.svc jessie_udp@_http._tcp.dns-test-service.dns-2413.svc jessie_tcp@_http._tcp.dns-test-service.dns-2413.svc]

    
    Sep 21 21:16:59.167: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:59.172: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:59.176: INFO: Unable to read wheezy_udp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:59.182: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:59.189: INFO: Unable to read wheezy_udp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
... skipping 5 lines ...
    Sep 21 21:16:59.254: INFO: Unable to read jessie_udp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:59.260: INFO: Unable to read jessie_tcp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:59.267: INFO: Unable to read jessie_udp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:59.273: INFO: Unable to read jessie_tcp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:59.280: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:59.286: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:16:59.324: INFO: Lookups using dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2413 wheezy_tcp@dns-test-service.dns-2413 wheezy_udp@dns-test-service.dns-2413.svc wheezy_tcp@dns-test-service.dns-2413.svc wheezy_udp@_http._tcp.dns-test-service.dns-2413.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2413.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2413 jessie_tcp@dns-test-service.dns-2413 jessie_udp@dns-test-service.dns-2413.svc jessie_tcp@dns-test-service.dns-2413.svc jessie_udp@_http._tcp.dns-test-service.dns-2413.svc jessie_tcp@_http._tcp.dns-test-service.dns-2413.svc]

    
    Sep 21 21:17:04.169: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:04.174: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:04.179: INFO: Unable to read wheezy_udp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:04.185: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:04.195: INFO: Unable to read wheezy_udp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
... skipping 5 lines ...
    Sep 21 21:17:04.263: INFO: Unable to read jessie_udp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:04.267: INFO: Unable to read jessie_tcp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:04.272: INFO: Unable to read jessie_udp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:04.278: INFO: Unable to read jessie_tcp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:04.284: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:04.290: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:04.323: INFO: Lookups using dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2413 wheezy_tcp@dns-test-service.dns-2413 wheezy_udp@dns-test-service.dns-2413.svc wheezy_tcp@dns-test-service.dns-2413.svc wheezy_udp@_http._tcp.dns-test-service.dns-2413.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2413.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2413 jessie_tcp@dns-test-service.dns-2413 jessie_udp@dns-test-service.dns-2413.svc jessie_tcp@dns-test-service.dns-2413.svc jessie_udp@_http._tcp.dns-test-service.dns-2413.svc jessie_tcp@_http._tcp.dns-test-service.dns-2413.svc]

    
    Sep 21 21:17:09.168: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:09.172: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:09.177: INFO: Unable to read wheezy_udp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:09.182: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:09.187: INFO: Unable to read wheezy_udp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
... skipping 5 lines ...
    Sep 21 21:17:09.249: INFO: Unable to read jessie_udp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:09.254: INFO: Unable to read jessie_tcp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:09.258: INFO: Unable to read jessie_udp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:09.263: INFO: Unable to read jessie_tcp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:09.268: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:09.273: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:09.302: INFO: Lookups using dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2413 wheezy_tcp@dns-test-service.dns-2413 wheezy_udp@dns-test-service.dns-2413.svc wheezy_tcp@dns-test-service.dns-2413.svc wheezy_udp@_http._tcp.dns-test-service.dns-2413.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2413.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2413 jessie_tcp@dns-test-service.dns-2413 jessie_udp@dns-test-service.dns-2413.svc jessie_tcp@dns-test-service.dns-2413.svc jessie_udp@_http._tcp.dns-test-service.dns-2413.svc jessie_tcp@_http._tcp.dns-test-service.dns-2413.svc]

    
    Sep 21 21:17:14.166: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:14.170: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:14.174: INFO: Unable to read wheezy_udp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:14.177: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:14.181: INFO: Unable to read wheezy_udp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
... skipping 5 lines ...
    Sep 21 21:17:14.231: INFO: Unable to read jessie_udp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:14.235: INFO: Unable to read jessie_tcp@dns-test-service.dns-2413 from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:14.238: INFO: Unable to read jessie_udp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:14.242: INFO: Unable to read jessie_tcp@dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:14.246: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:14.249: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2413.svc from pod dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002: the server could not find the requested resource (get pods dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002)
    Sep 21 21:17:14.276: INFO: Lookups using dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2413 wheezy_tcp@dns-test-service.dns-2413 wheezy_udp@dns-test-service.dns-2413.svc wheezy_tcp@dns-test-service.dns-2413.svc wheezy_udp@_http._tcp.dns-test-service.dns-2413.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2413.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2413 jessie_tcp@dns-test-service.dns-2413 jessie_udp@dns-test-service.dns-2413.svc jessie_tcp@dns-test-service.dns-2413.svc jessie_udp@_http._tcp.dns-test-service.dns-2413.svc jessie_tcp@_http._tcp.dns-test-service.dns-2413.svc]

    
    Sep 21 21:17:19.280: INFO: DNS probes using dns-2413/dns-test-cf64e671-cd2d-43e7-bdc8-7caf32bca002 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:17:19.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-2413" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":72,"skipped":1159,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:17:19.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-9767" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":73,"skipped":1161,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:17:24.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-8112" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":74,"skipped":1171,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:17:34.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-6183" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":44,"skipped":708,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
    Sep 21 21:17:27.836: INFO: Creating new exec pod
    Sep 21 21:17:30.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4803 exec execpodjd88b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
    Sep 21 21:17:31.099: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
    Sep 21 21:17:31.099: INFO: stdout: "externalname-service-bhx2v"
    Sep 21 21:17:31.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4803 exec execpodjd88b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.81.162 80'
    Sep 21 21:17:33.329: INFO: rc: 1
    Sep 21 21:17:33.329: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4803 exec execpodjd88b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.81.162 80:
    Command stdout:
    
    stderr:
    + nc -v -t -w 2 10.135.81.162 80
    + echo hostName
    nc: connect to 10.135.81.162 port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep 21 21:17:34.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4803 exec execpodjd88b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.81.162 80'
    Sep 21 21:17:34.558: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.135.81.162 80\nConnection to 10.135.81.162 80 port [tcp/http] succeeded!\n"
    Sep 21 21:17:34.558: INFO: stdout: "externalname-service-bhx2v"
    Sep 21 21:17:34.558: INFO: Cleaning up the ExternalName to ClusterIP test service
... skipping 3 lines ...
    STEP: Destroying namespace "services-4803" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •S
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":75,"skipped":1179,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:17:42.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-6554" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":76,"skipped":1193,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
    [It] should serve multiport endpoints from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating service multi-endpoint-test in namespace services-1112
    STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1112 to expose endpoints map[]
    Sep 21 21:17:42.258: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found
    Sep 21 21:17:43.270: INFO: successfully validated that service multi-endpoint-test in namespace services-1112 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-1112
    Sep 21 21:17:43.286: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep 21 21:17:45.293: INFO: The status of Pod pod1 is Running (Ready = true)
    STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1112 to expose endpoints map[pod1:[100]]
    Sep 21 21:17:45.314: INFO: successfully validated that service multi-endpoint-test in namespace services-1112 exposes endpoints map[pod1:[100]]
... skipping 14 lines ...
    STEP: Destroying namespace "services-1112" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":77,"skipped":1196,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:17:51.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-4107" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":45,"skipped":763,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:17:51.918: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create secret due to empty secret key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name secret-emptykey-test-3b862b67-3afc-4337-8e75-077e7ceeab64
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:17:51.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-289" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":46,"skipped":791,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Ingress API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:17:52.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingress-2662" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":47,"skipped":797,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:17:58.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-4170" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":48,"skipped":845,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:17:58.340: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-lifecycle-hook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:18:16.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-2803" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":845,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:18:16.488: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-38aa4465-68fd-4863-8884-a01f7b1521fc
    STEP: Creating a pod to test consume configMaps
    Sep 21 21:18:16.547: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9410b28d-35eb-428c-a466-5ef908b83679" in namespace "projected-3118" to be "Succeeded or Failed"
    Sep 21 21:18:16.554: INFO: Pod "pod-projected-configmaps-9410b28d-35eb-428c-a466-5ef908b83679": Phase="Pending", Reason="", readiness=false. Elapsed: 5.615709ms
    Sep 21 21:18:18.558: INFO: Pod "pod-projected-configmaps-9410b28d-35eb-428c-a466-5ef908b83679": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009738529s
    STEP: Saw pod success
    Sep 21 21:18:18.558: INFO: Pod "pod-projected-configmaps-9410b28d-35eb-428c-a466-5ef908b83679" satisfied condition "Succeeded or Failed"
    Sep 21 21:18:18.561: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod pod-projected-configmaps-9410b28d-35eb-428c-a466-5ef908b83679 container projected-configmap-volume-test: <nil>
    STEP: delete the pod
    Sep 21 21:18:18.590: INFO: Waiting for pod pod-projected-configmaps-9410b28d-35eb-428c-a466-5ef908b83679 to disappear
    Sep 21 21:18:18.594: INFO: Pod pod-projected-configmaps-9410b28d-35eb-428c-a466-5ef908b83679 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:18:18.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3118" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":855,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9061-crds.webhook.example.com via the AdmissionRegistration API
    Sep 21 21:18:01.842: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:18:11.955: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:18:22.057: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:18:32.154: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:18:42.169: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:18:42.169: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should mutate custom resource with different stored version [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 21 21:18:42.169: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:18:48.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-587" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":51,"skipped":881,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:18:48.828: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename podtemplate
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:18:48.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-2301" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":52,"skipped":881,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:18:48.961: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 21 21:18:49.005: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-0a393faf-fdcc-48f4-928b-93b0227dc931" in namespace "security-context-test-7381" to be "Succeeded or Failed"
    Sep 21 21:18:49.009: INFO: Pod "busybox-privileged-false-0a393faf-fdcc-48f4-928b-93b0227dc931": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094446ms
    Sep 21 21:18:51.014: INFO: Pod "busybox-privileged-false-0a393faf-fdcc-48f4-928b-93b0227dc931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009031578s
    Sep 21 21:18:51.014: INFO: Pod "busybox-privileged-false-0a393faf-fdcc-48f4-928b-93b0227dc931" satisfied condition "Succeeded or Failed"
    Sep 21 21:18:51.024: INFO: Got logs for pod "busybox-privileged-false-0a393faf-fdcc-48f4-928b-93b0227dc931": "ip: RTNETLINK answers: Operation not permitted\n"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:18:51.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-7381" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":895,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:02.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-6384" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":54,"skipped":903,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:04.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-6071" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":55,"skipped":915,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 61 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:11.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-6873" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":56,"skipped":931,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:11.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-4766" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":57,"skipped":957,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-4707-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":58,"skipped":1006,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 21:19:15.307: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7be6a120-ce3a-47f7-984e-55292f5b4fb3" in namespace "downward-api-2684" to be "Succeeded or Failed"
    Sep 21 21:19:15.312: INFO: Pod "downwardapi-volume-7be6a120-ce3a-47f7-984e-55292f5b4fb3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.578694ms
    Sep 21 21:19:17.316: INFO: Pod "downwardapi-volume-7be6a120-ce3a-47f7-984e-55292f5b4fb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008999113s
    STEP: Saw pod success
    Sep 21 21:19:17.317: INFO: Pod "downwardapi-volume-7be6a120-ce3a-47f7-984e-55292f5b4fb3" satisfied condition "Succeeded or Failed"
    Sep 21 21:19:17.321: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod downwardapi-volume-7be6a120-ce3a-47f7-984e-55292f5b4fb3 container client-container: <nil>
    STEP: delete the pod
    Sep 21 21:19:17.345: INFO: Waiting for pod downwardapi-volume-7be6a120-ce3a-47f7-984e-55292f5b4fb3 to disappear
    Sep 21 21:19:17.350: INFO: Pod downwardapi-volume-7be6a120-ce3a-47f7-984e-55292f5b4fb3 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:17.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-2684" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":1014,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:21.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9951" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":1016,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 21:19:21.569: INFO: Waiting up to 5m0s for pod "downwardapi-volume-550432e9-12da-482b-8e42-f40f79a20fd7" in namespace "downward-api-6161" to be "Succeeded or Failed"
    Sep 21 21:19:21.573: INFO: Pod "downwardapi-volume-550432e9-12da-482b-8e42-f40f79a20fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.129438ms
    Sep 21 21:19:23.578: INFO: Pod "downwardapi-volume-550432e9-12da-482b-8e42-f40f79a20fd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007851833s
    STEP: Saw pod success
    Sep 21 21:19:23.578: INFO: Pod "downwardapi-volume-550432e9-12da-482b-8e42-f40f79a20fd7" satisfied condition "Succeeded or Failed"
    Sep 21 21:19:23.582: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod downwardapi-volume-550432e9-12da-482b-8e42-f40f79a20fd7 container client-container: <nil>
    STEP: delete the pod
    Sep 21 21:19:23.611: INFO: Waiting for pod downwardapi-volume-550432e9-12da-482b-8e42-f40f79a20fd7 to disappear
    Sep 21 21:19:23.618: INFO: Pod downwardapi-volume-550432e9-12da-482b-8e42-f40f79a20fd7 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:23.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6161" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":61,"skipped":1031,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:26.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9063" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":62,"skipped":1044,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:19:26.181: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 21 21:19:28.234: INFO: Deleting pod "var-expansion-361eec04-f1ed-4479-82e5-f5535f37f285" in namespace "var-expansion-2099"
    Sep 21 21:19:28.240: INFO: Wait up to 5m0s for pod "var-expansion-361eec04-f1ed-4479-82e5-f5535f37f285" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:36.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-2099" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":63,"skipped":1057,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":77,"skipped":1212,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:18:42.760: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
    STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1908-crds.webhook.example.com via the AdmissionRegistration API
    Sep 21 21:18:56.940: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:19:07.053: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:19:17.156: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:19:27.259: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:19:37.272: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:19:37.272: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should mutate custom resource with different stored version [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 21 21:19:37.272: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 5 lines ...
    Sep 21 21:19:36.338: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override arguments
    Sep 21 21:19:36.394: INFO: Waiting up to 5m0s for pod "client-containers-137b1cd4-c6a2-4568-9d7e-bf7afcad2330" in namespace "containers-3520" to be "Succeeded or Failed"
    Sep 21 21:19:36.397: INFO: Pod "client-containers-137b1cd4-c6a2-4568-9d7e-bf7afcad2330": Phase="Pending", Reason="", readiness=false. Elapsed: 3.198284ms
    Sep 21 21:19:38.403: INFO: Pod "client-containers-137b1cd4-c6a2-4568-9d7e-bf7afcad2330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008797471s
    STEP: Saw pod success
    Sep 21 21:19:38.403: INFO: Pod "client-containers-137b1cd4-c6a2-4568-9d7e-bf7afcad2330" satisfied condition "Succeeded or Failed"
    Sep 21 21:19:38.406: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod client-containers-137b1cd4-c6a2-4568-9d7e-bf7afcad2330 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 21 21:19:38.421: INFO: Waiting for pod client-containers-137b1cd4-c6a2-4568-9d7e-bf7afcad2330 to disappear
    Sep 21 21:19:38.423: INFO: Pod client-containers-137b1cd4-c6a2-4568-9d7e-bf7afcad2330 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:38.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-3520" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1094,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
    [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
    STEP: Watching for error events or started pod
    STEP: Waiting for pod completion
    STEP: Checking that the pod succeeded
    STEP: Getting logs from the pod
    STEP: Checking that the sysctl is actually updated
    [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:40.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-5952" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":65,"skipped":1126,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 21:19:40.665: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba5c4c0d-5fc0-4d92-b5d9-94c78539de03" in namespace "projected-8132" to be "Succeeded or Failed"
    Sep 21 21:19:40.669: INFO: Pod "downwardapi-volume-ba5c4c0d-5fc0-4d92-b5d9-94c78539de03": Phase="Pending", Reason="", readiness=false. Elapsed: 3.374047ms
    Sep 21 21:19:42.674: INFO: Pod "downwardapi-volume-ba5c4c0d-5fc0-4d92-b5d9-94c78539de03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008536904s
    STEP: Saw pod success
    Sep 21 21:19:42.674: INFO: Pod "downwardapi-volume-ba5c4c0d-5fc0-4d92-b5d9-94c78539de03" satisfied condition "Succeeded or Failed"
    Sep 21 21:19:42.678: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod downwardapi-volume-ba5c4c0d-5fc0-4d92-b5d9-94c78539de03 container client-container: <nil>
    STEP: delete the pod
    Sep 21 21:19:42.695: INFO: Waiting for pod downwardapi-volume-ba5c4c0d-5fc0-4d92-b5d9-94c78539de03 to disappear
    Sep 21 21:19:42.700: INFO: Pod downwardapi-volume-ba5c4c0d-5fc0-4d92-b5d9-94c78539de03 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:42.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8132" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":66,"skipped":1157,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 21:19:42.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a43b775-2d7d-423b-9a98-4c1bd6c01660" in namespace "projected-607" to be "Succeeded or Failed"
    Sep 21 21:19:42.813: INFO: Pod "downwardapi-volume-3a43b775-2d7d-423b-9a98-4c1bd6c01660": Phase="Pending", Reason="", readiness=false. Elapsed: 10.451673ms
    Sep 21 21:19:44.818: INFO: Pod "downwardapi-volume-3a43b775-2d7d-423b-9a98-4c1bd6c01660": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015828471s
    STEP: Saw pod success
    Sep 21 21:19:44.818: INFO: Pod "downwardapi-volume-3a43b775-2d7d-423b-9a98-4c1bd6c01660" satisfied condition "Succeeded or Failed"
    Sep 21 21:19:44.822: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod downwardapi-volume-3a43b775-2d7d-423b-9a98-4c1bd6c01660 container client-container: <nil>
    STEP: delete the pod
    Sep 21 21:19:44.843: INFO: Waiting for pod downwardapi-volume-3a43b775-2d7d-423b-9a98-4c1bd6c01660 to disappear
    Sep 21 21:19:44.846: INFO: Pod downwardapi-volume-3a43b775-2d7d-423b-9a98-4c1bd6c01660 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:44.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-607" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":1175,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:19:44.929: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep 21 21:19:44.976: INFO: Waiting up to 5m0s for pod "pod-4380f220-007d-4006-b025-7f7fd8eb2837" in namespace "emptydir-4535" to be "Succeeded or Failed"
    Sep 21 21:19:44.981: INFO: Pod "pod-4380f220-007d-4006-b025-7f7fd8eb2837": Phase="Pending", Reason="", readiness=false. Elapsed: 4.805294ms
    Sep 21 21:19:46.987: INFO: Pod "pod-4380f220-007d-4006-b025-7f7fd8eb2837": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010041229s
    STEP: Saw pod success
    Sep 21 21:19:46.987: INFO: Pod "pod-4380f220-007d-4006-b025-7f7fd8eb2837" satisfied condition "Succeeded or Failed"
    Sep 21 21:19:46.990: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod pod-4380f220-007d-4006-b025-7f7fd8eb2837 container test-container: <nil>
    STEP: delete the pod
    Sep 21 21:19:47.005: INFO: Waiting for pod pod-4380f220-007d-4006-b025-7f7fd8eb2837 to disappear
    Sep 21 21:19:47.008: INFO: Pod pod-4380f220-007d-4006-b025-7f7fd8eb2837 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:47.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4535" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":68,"skipped":1207,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:19:47.062: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's command [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's command
    Sep 21 21:19:47.112: INFO: Waiting up to 5m0s for pod "var-expansion-c3522513-43f0-47c7-ac26-2fa0a129b86e" in namespace "var-expansion-297" to be "Succeeded or Failed"
    Sep 21 21:19:47.116: INFO: Pod "var-expansion-c3522513-43f0-47c7-ac26-2fa0a129b86e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.968135ms
    Sep 21 21:19:49.121: INFO: Pod "var-expansion-c3522513-43f0-47c7-ac26-2fa0a129b86e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008895568s
    STEP: Saw pod success
    Sep 21 21:19:49.121: INFO: Pod "var-expansion-c3522513-43f0-47c7-ac26-2fa0a129b86e" satisfied condition "Succeeded or Failed"
    Sep 21 21:19:49.125: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod var-expansion-c3522513-43f0-47c7-ac26-2fa0a129b86e container dapi-container: <nil>
    STEP: delete the pod
    Sep 21 21:19:49.143: INFO: Waiting for pod var-expansion-c3522513-43f0-47c7-ac26-2fa0a129b86e to disappear
    Sep 21 21:19:49.146: INFO: Pod var-expansion-c3522513-43f0-47c7-ac26-2fa0a129b86e no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:49.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-297" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":69,"skipped":1227,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:19:49.168: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-6b74db9e-20f2-408c-a3f2-6f36def7d9fe
    STEP: Creating a pod to test consume secrets
    Sep 21 21:19:49.214: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9811abd8-fdc1-49f3-a64c-572653e9b791" in namespace "projected-5936" to be "Succeeded or Failed"
    Sep 21 21:19:49.220: INFO: Pod "pod-projected-secrets-9811abd8-fdc1-49f3-a64c-572653e9b791": Phase="Pending", Reason="", readiness=false. Elapsed: 4.881646ms
    Sep 21 21:19:51.226: INFO: Pod "pod-projected-secrets-9811abd8-fdc1-49f3-a64c-572653e9b791": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011228701s
    STEP: Saw pod success
    Sep 21 21:19:51.226: INFO: Pod "pod-projected-secrets-9811abd8-fdc1-49f3-a64c-572653e9b791" satisfied condition "Succeeded or Failed"
    Sep 21 21:19:51.231: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod pod-projected-secrets-9811abd8-fdc1-49f3-a64c-572653e9b791 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 21 21:19:51.254: INFO: Waiting for pod pod-projected-secrets-9811abd8-fdc1-49f3-a64c-572653e9b791 to disappear
    Sep 21 21:19:51.259: INFO: Pod pod-projected-secrets-9811abd8-fdc1-49f3-a64c-572653e9b791 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:51.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5936" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":70,"skipped":1233,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:53.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-7565" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":71,"skipped":1253,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:19:53.969: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep 21 21:19:54.025: INFO: Waiting up to 5m0s for pod "security-context-727c81a3-7af7-4042-897d-a65f155287ec" in namespace "security-context-3179" to be "Succeeded or Failed"
    Sep 21 21:19:54.029: INFO: Pod "security-context-727c81a3-7af7-4042-897d-a65f155287ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.636928ms
    Sep 21 21:19:56.034: INFO: Pod "security-context-727c81a3-7af7-4042-897d-a65f155287ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008808125s
    STEP: Saw pod success
    Sep 21 21:19:56.034: INFO: Pod "security-context-727c81a3-7af7-4042-897d-a65f155287ec" satisfied condition "Succeeded or Failed"
    Sep 21 21:19:56.037: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod security-context-727c81a3-7af7-4042-897d-a65f155287ec container test-container: <nil>
    STEP: delete the pod
    Sep 21 21:19:56.053: INFO: Waiting for pod security-context-727c81a3-7af7-4042-897d-a65f155287ec to disappear
    Sep 21 21:19:56.057: INFO: Pod security-context-727c81a3-7af7-4042-897d-a65f155287ec no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:56.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-3179" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":72,"skipped":1284,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:19:56.071: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:19:56.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-8035" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":73,"skipped":1284,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":77,"skipped":1212,"failed":3,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:19:37.880: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
    STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1246-crds.webhook.example.com via the AdmissionRegistration API
    Sep 21 21:19:51.841: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:20:01.954: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:20:12.058: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:20:22.154: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:20:32.165: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:20:32.166: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should mutate custom resource with different stored version [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 21 21:20:32.166: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826
    ------------------------------
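    Note on the failure above: the repeated "Waiting for webhook configuration to be ready..." lines come from the e2e webhook fixture, which registers the MutatingWebhookConfiguration and then polls until the webhook backend actually admits requests; the FAIL means that readiness check timed out. A minimal debugging sketch, assuming the test namespace and webhook fixtures were still present (the <name> and <webhook-test-namespace> placeholders are illustrative, not taken from this run):
    
      kubectl get mutatingwebhookconfigurations                 # is the configuration registered at all?
      kubectl describe mutatingwebhookconfiguration <name>      # check clientConfig.service, caBundle, failurePolicy
      kubectl -n <webhook-test-namespace> get pods,endpoints    # is the backing webhook pod Ready and does its Service have endpoints?
    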
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":77,"skipped":1212,"failed":4,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 21:20:32.979: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91a4f89b-d32b-4243-b397-4e7263217355" in namespace "downward-api-5156" to be "Succeeded or Failed"
    Sep 21 21:20:32.988: INFO: Pod "downwardapi-volume-91a4f89b-d32b-4243-b397-4e7263217355": Phase="Pending", Reason="", readiness=false. Elapsed: 8.81303ms
    Sep 21 21:20:34.994: INFO: Pod "downwardapi-volume-91a4f89b-d32b-4243-b397-4e7263217355": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014680576s
    STEP: Saw pod success
    Sep 21 21:20:34.994: INFO: Pod "downwardapi-volume-91a4f89b-d32b-4243-b397-4e7263217355" satisfied condition "Succeeded or Failed"
    Sep 21 21:20:34.999: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod downwardapi-volume-91a4f89b-d32b-4243-b397-4e7263217355 container client-container: <nil>
    STEP: delete the pod
    Sep 21 21:20:35.032: INFO: Waiting for pod downwardapi-volume-91a4f89b-d32b-4243-b397-4e7263217355 to disappear
    Sep 21 21:20:35.039: INFO: Pod downwardapi-volume-91a4f89b-d32b-4243-b397-4e7263217355 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:20:35.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-5156" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":78,"skipped":1254,"failed":4,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:20:35.060: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename disruption
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:20:39.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-9612" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":79,"skipped":1254,"failed":4,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-scheduling] LimitRange
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:20:46.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "limitrange-4028" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":80,"skipped":1292,"failed":4,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 271 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  30s   default-scheduler  Successfully assigned pod-network-test-8743/netserver-3 to k8s-upgrade-and-conformance-kcibnj-worker-f3twbs
      Normal  Pulled     29s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    29s   kubelet            Created container webserver
      Normal  Started    29s   kubelet            Started container webserver
    
    Sep 21 21:09:43.876: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.2.54:9080/dial?request=hostname&protocol=http&host=192.168.1.54&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-0:{}])
    Sep 21 21:09:43.876: INFO: ...failed...will try again in next pass
    Sep 21 21:09:43.876: INFO: Breadth first check of 192.168.0.41 on host 172.18.0.4...
    Sep 21 21:09:43.881: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.54:9080/dial?request=hostname&protocol=http&host=192.168.0.41&port=8080&tries=1'] Namespace:pod-network-test-8743 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 21 21:09:43.882: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 21 21:09:49.002: INFO: Waiting for responses: map[netserver-1:{}]
    Sep 21 21:09:51.003: INFO: 
    Output of kubectl describe pod pod-network-test-8743/netserver-0:
... skipping 240 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  38s   default-scheduler  Successfully assigned pod-network-test-8743/netserver-3 to k8s-upgrade-and-conformance-kcibnj-worker-f3twbs
      Normal  Pulled     37s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    37s   kubelet            Created container webserver
      Normal  Started    37s   kubelet            Started container webserver
    
    Sep 21 21:09:51.642: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.2.54:9080/dial?request=hostname&protocol=http&host=192.168.0.41&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-1:{}])
    Sep 21 21:09:51.642: INFO: ...failed...will try again in next pass
    Sep 21 21:09:51.642: INFO: Breadth first check of 192.168.6.54 on host 172.18.0.5...
    Sep 21 21:09:51.646: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.54:9080/dial?request=hostname&protocol=http&host=192.168.6.54&port=8080&tries=1'] Namespace:pod-network-test-8743 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 21 21:09:51.646: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 21 21:09:51.764: INFO: Waiting for responses: map[]
    Sep 21 21:09:51.764: INFO: reached 192.168.6.54 after 0/1 tries
    Sep 21 21:09:51.764: INFO: Breadth first check of 192.168.2.53 on host 172.18.0.6...
... skipping 387 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  6m8s  default-scheduler  Successfully assigned pod-network-test-8743/netserver-3 to k8s-upgrade-and-conformance-kcibnj-worker-f3twbs
      Normal  Pulled     6m7s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    6m7s  kubelet            Created container webserver
      Normal  Started    6m7s  kubelet            Started container webserver
    
    Sep 21 21:15:21.898: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.54:9080/dial?request=hostname&protocol=http&host=192.168.1.54&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-0:{}])
    Sep 21 21:15:21.898: INFO: ... Done probing pod [[[ 192.168.1.54 ]]]
    Sep 21 21:15:21.899: INFO: succeeded at polling 3 out of 4 connections
... skipping 382 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  11m   default-scheduler  Successfully assigned pod-network-test-8743/netserver-3 to k8s-upgrade-and-conformance-kcibnj-worker-f3twbs
      Normal  Pulled     11m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    11m   kubelet            Created container webserver
      Normal  Started    11m   kubelet            Started container webserver
    
    Sep 21 21:20:49.542: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.54:9080/dial?request=hostname&protocol=http&host=192.168.0.41&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-1:{}])
    Sep 21 21:20:49.542: INFO: ... Done probing pod [[[ 192.168.0.41 ]]]
    Sep 21 21:20:49.542: INFO: succeeded at polling 2 out of 4 connections
    Sep 21 21:20:49.542: INFO: pod polling failure summary:
    Sep 21 21:20:49.542: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.54:9080/dial?request=hostname&protocol=http&host=192.168.1.54&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-0:{}]
    Sep 21 21:20:49.542: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.54:9080/dial?request=hostname&protocol=http&host=192.168.0.41&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-1:{}]
    Sep 21 21:20:49.542: FAIL: failed,  2 out of 4 connections failed
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0034f5800)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 21 21:20:49.542: failed,  2 out of 4 connections failed
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
    ------------------------------
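    For context on the dial errors above: the check runs from test-container-pod, whose /dial endpoint on port 9080 probes each netserver pod on port 8080 and returns the hostnames that answered, so "retrieved map[]" means the target pod never responded. The command is recorded verbatim in the log; reproducing one probe by hand (only meaningful while the pod-network-test-8743 namespace was still up) would have looked roughly like:
    
      kubectl exec -n pod-network-test-8743 test-container-pod -- \
        /bin/sh -c "curl -g -q -s 'http://192.168.2.54:9080/dial?request=hostname&protocol=http&host=192.168.1.54&port=8080&tries=1'"
    
    Here 192.168.2.54 is the test-container-pod and 192.168.1.54 is netserver-0, both taken from the log. That the probes against 192.168.1.54 and 192.168.0.41 kept failing while 192.168.6.54 was reached on the first try suggests a pod-to-pod connectivity problem toward those specific pods rather than a problem with the probe itself.
    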
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:20:56.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-916" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":74,"skipped":1314,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:20:56.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2450" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":75,"skipped":1322,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 41 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:20:56.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-897" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":81,"skipped":1293,"failed":4,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:20:59.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-7050" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":82,"skipped":1305,"failed":4,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:20:59.062: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should mount projected service account token [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test service account token: 
    Sep 21 21:20:59.103: INFO: Waiting up to 5m0s for pod "test-pod-507f7712-ebbb-4f88-ac21-006994f20f2f" in namespace "svcaccounts-3603" to be "Succeeded or Failed"
    Sep 21 21:20:59.106: INFO: Pod "test-pod-507f7712-ebbb-4f88-ac21-006994f20f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.716532ms
    Sep 21 21:21:01.113: INFO: Pod "test-pod-507f7712-ebbb-4f88-ac21-006994f20f2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009344405s
    STEP: Saw pod success
    Sep 21 21:21:01.113: INFO: Pod "test-pod-507f7712-ebbb-4f88-ac21-006994f20f2f" satisfied condition "Succeeded or Failed"
    Sep 21 21:21:01.116: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod test-pod-507f7712-ebbb-4f88-ac21-006994f20f2f container agnhost-container: <nil>
    STEP: delete the pod
    Sep 21 21:21:01.141: INFO: Waiting for pod test-pod-507f7712-ebbb-4f88-ac21-006994f20f2f to disappear
    Sep 21 21:21:01.144: INFO: Pod test-pod-507f7712-ebbb-4f88-ac21-006994f20f2f no longer exists
    [AfterEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:21:01.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-3603" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":83,"skipped":1322,"failed":4,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:21:17.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-6010" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":76,"skipped":1376,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 157 lines ...
    Sep 21 21:26:11.887: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false)
    Sep 21 21:26:13.886: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false)
    Sep 21 21:26:15.889: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false)
    Sep 21 21:26:17.887: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false)
    Sep 21 21:26:19.889: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false)
    Sep 21 21:26:19.895: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false)
    Sep 21 21:26:19.895: FAIL: Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      when create a pod with lifecycle hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
        should execute poststart exec hook properly [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 21 21:26:19.895: Unexpected error:
            <*errors.errorString | 0xc000244290>: {
                s: "timed out waiting for the condition",
            }
            timed out waiting for the condition
        occurred
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103
    ------------------------------
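    A note on the poststart-hook failure above: the pod pod-with-poststart-exec-hook reached Running but never reported Ready=true, and the framework's readiness wait gave up with "timed out waiting for the condition". A generic first look at such a state, were it reproduced while the namespace still existed (standard kubectl only; <namespace> is a placeholder, since the failing attempt's namespace is not shown in this excerpt):
    
      kubectl -n <namespace> describe pod pod-with-poststart-exec-hook    # container state, readiness, and lifecycle-hook events
      kubectl -n <namespace> get events --sort-by=.lastTimestamp          # recent events, e.g. FailedPostStartHook or probe failures
    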
    {"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":76,"skipped":1381,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:26:19.918: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-lifecycle-hook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:26:30.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-2525" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":77,"skipped":1381,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:26:30.054: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-80f85a33-99f1-4df9-b2d2-e0df4313ace3
    STEP: Creating a pod to test consume configMaps
    Sep 21 21:26:30.102: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fddcf1b9-1029-4f4f-ab9a-d727691fc53a" in namespace "projected-1950" to be "Succeeded or Failed"
    Sep 21 21:26:30.106: INFO: Pod "pod-projected-configmaps-fddcf1b9-1029-4f4f-ab9a-d727691fc53a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.980144ms
    Sep 21 21:26:32.112: INFO: Pod "pod-projected-configmaps-fddcf1b9-1029-4f4f-ab9a-d727691fc53a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009911899s
    STEP: Saw pod success
    Sep 21 21:26:32.112: INFO: Pod "pod-projected-configmaps-fddcf1b9-1029-4f4f-ab9a-d727691fc53a" satisfied condition "Succeeded or Failed"
    Sep 21 21:26:32.115: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod pod-projected-configmaps-fddcf1b9-1029-4f4f-ab9a-d727691fc53a container agnhost-container: <nil>
    STEP: delete the pod
    Sep 21 21:26:32.138: INFO: Waiting for pod pod-projected-configmaps-fddcf1b9-1029-4f4f-ab9a-d727691fc53a to disappear
    Sep 21 21:26:32.141: INFO: Pod pod-projected-configmaps-fddcf1b9-1029-4f4f-ab9a-d727691fc53a no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:26:32.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1950" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":78,"skipped":1381,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":787,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:20:49.568: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 284 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  32s   default-scheduler  Successfully assigned pod-network-test-6319/netserver-3 to k8s-upgrade-and-conformance-kcibnj-worker-f3twbs
      Normal  Pulled     31s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    31s   kubelet            Created container webserver
      Normal  Started    31s   kubelet            Started container webserver
    
    Sep 21 21:21:21.633: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.1.85:9080/dial?request=hostname&protocol=http&host=192.168.2.71&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}])
    Sep 21 21:21:21.633: INFO: ...failed...will try again in next pass
    Sep 21 21:21:21.633: INFO: Going to retry 1 out of 4 pods....
    Sep 21 21:21:21.633: INFO: Doublechecking 1 pods in host 172.18.0.6 which werent seen the first time.
    Sep 21 21:21:21.633: INFO: Now attempting to probe pod [[[ 192.168.2.71 ]]]
    Sep 21 21:21:21.637: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.85:9080/dial?request=hostname&protocol=http&host=192.168.2.71&port=8080&tries=1'] Namespace:pod-network-test-6319 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 21 21:21:21.637: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 21 21:21:26.739: INFO: Waiting for responses: map[netserver-3:{}]
... skipping 377 lines ...
      ----    ------     ----   ----               -------
      Normal  Scheduled  6m     default-scheduler  Successfully assigned pod-network-test-6319/netserver-3 to k8s-upgrade-and-conformance-kcibnj-worker-f3twbs
      Normal  Pulled     5m59s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    5m59s  kubelet            Created container webserver
      Normal  Started    5m59s  kubelet            Started container webserver
    
    Sep 21 21:26:49.006: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.1.85:9080/dial?request=hostname&protocol=http&host=192.168.2.71&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}])
    Sep 21 21:26:49.006: INFO: ... Done probing pod [[[ 192.168.2.71 ]]]
    Sep 21 21:26:49.006: INFO: succeeded at polling 3 out of 4 connections
    Sep 21 21:26:49.006: INFO: pod polling failure summary:
    Sep 21 21:26:49.006: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.1.85:9080/dial?request=hostname&protocol=http&host=192.168.2.71&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}]
    Sep 21 21:26:49.006: FAIL: failed,  1 out of 4 connections failed
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0034f5800)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 21 21:26:49.006: failed,  1 out of 4 connections failed
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":787,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:26:49.024: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 44 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:27:11.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-2103" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":787,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:27:11.714: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-c5a7bf83-a9cc-4c47-9daa-3d7ca0b7efb8
    STEP: Creating a pod to test consume configMaps
    Sep 21 21:27:11.772: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-db5468cf-7e76-43bf-bbdc-aa6c10acb937" in namespace "projected-84" to be "Succeeded or Failed"
    Sep 21 21:27:11.777: INFO: Pod "pod-projected-configmaps-db5468cf-7e76-43bf-bbdc-aa6c10acb937": Phase="Pending", Reason="", readiness=false. Elapsed: 4.901081ms
    Sep 21 21:27:13.783: INFO: Pod "pod-projected-configmaps-db5468cf-7e76-43bf-bbdc-aa6c10acb937": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011127559s
    STEP: Saw pod success
    Sep 21 21:27:13.783: INFO: Pod "pod-projected-configmaps-db5468cf-7e76-43bf-bbdc-aa6c10acb937" satisfied condition "Succeeded or Failed"
    Sep 21 21:27:13.789: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod pod-projected-configmaps-db5468cf-7e76-43bf-bbdc-aa6c10acb937 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 21 21:27:13.830: INFO: Waiting for pod pod-projected-configmaps-db5468cf-7e76-43bf-bbdc-aa6c10acb937 to disappear
    Sep 21 21:27:13.839: INFO: Pod pod-projected-configmaps-db5468cf-7e76-43bf-bbdc-aa6c10acb937 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:27:13.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-84" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":797,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 48 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:27:18.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-1980" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":51,"skipped":810,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:27:18.243: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-runtime
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: create the container
    STEP: wait for the container to reach Failed
    STEP: get the container status
    STEP: the container should be terminated
    STEP: the termination message should be set
    Sep 21 21:27:20.321: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
    STEP: delete the container
    [AfterEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:27:20.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-2803" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":829,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:27:22.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-2287" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":854,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
    STEP: Creating a mutating webhook configuration
    Sep 21 21:26:45.753: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:26:55.865: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:27:05.972: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:27:16.068: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:27:26.086: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:27:26.086: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      patching/updating a mutating webhook should work [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 21 21:27:26.086: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:27:27.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-9403" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":54,"skipped":972,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:27:32.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-9338" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":55,"skipped":973,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "crd-webhook-9756" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":56,"skipped":991,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] HostPort
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 42 lines ...
    [It] should call prestop when killing a pod  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating server pod server in namespace prestop-5907
    STEP: Waiting for pods to come up.
    STEP: Creating tester pod tester in namespace prestop-5907
    STEP: Deleting pre-stop pod
    STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
    STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
    Sep 21 21:28:17.080: FAIL: validating pre-stop.
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-node] PreStop
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
      should call prestop when killing a pod  [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 21 21:28:17.080: validating pre-stop.
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":78,"skipped":1401,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:27:26.194: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
    STEP: Creating a mutating webhook configuration
    Sep 21 21:27:40.297: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:27:50.409: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:28:00.513: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:28:10.609: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:28:20.620: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:28:20.621: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      patching/updating a mutating webhook should work [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 21 21:28:20.621: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:527
    ------------------------------
    {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":57,"skipped":1011,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:27:52.632: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
    Sep 21 21:27:54.782: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:27:54.789: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:27:54.833: INFO: Unable to read jessie_udp@dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:27:54.842: INFO: Unable to read jessie_tcp@dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:27:54.850: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:27:54.857: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:27:54.898: INFO: Lookups using dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d failed for: [wheezy_udp@dns-test-service.dns-5635.svc.cluster.local wheezy_tcp@dns-test-service.dns-5635.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local jessie_udp@dns-test-service.dns-5635.svc.cluster.local jessie_tcp@dns-test-service.dns-5635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local]
    
    Sep 21 21:27:59.915: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:27:59.921: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:27:59.956: INFO: Unable to read jessie_udp@dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:27:59.965: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:27:59.971: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:27:59.994: INFO: Lookups using dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local jessie_udp@dns-test-service.dns-5635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local]
    
    Sep 21 21:28:04.917: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:04.927: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:04.976: INFO: Unable to read jessie_udp@dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:04.989: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:04.997: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:05.035: INFO: Lookups using dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local jessie_udp@dns-test-service.dns-5635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local]
    
    Sep 21 21:28:09.912: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:09.916: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:09.953: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:09.957: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:09.985: INFO: Lookups using dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local]
    
    Sep 21 21:28:14.912: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:14.917: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:14.961: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:14.965: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:14.991: INFO: Lookups using dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local]
    
    Sep 21 21:28:19.920: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:19.928: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:19.986: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:19.992: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local from pod dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d: the server could not find the requested resource (get pods dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d)
    Sep 21 21:28:20.026: INFO: Lookups using dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5635.svc.cluster.local]
    
    Sep 21 21:28:24.985: INFO: DNS probes using dns-5635/dns-test-7aa6289d-d0e2-4661-ac5d-67af74bbef4d succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:28:25.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-5635" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":58,"skipped":1011,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":78,"skipped":1401,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:28:20.728: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
    STEP: Creating a mutating webhook configuration
    Sep 21 21:28:34.527: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:28:44.640: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:28:54.744: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:29:04.840: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:29:14.858: INFO: Waiting for webhook configuration to be ready...
    Sep 21 21:29:14.859: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      patching/updating a mutating webhook should work [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 21 21:29:14.859: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:527
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":78,"skipped":1401,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:29:14.984: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 21 21:29:15.070: INFO: Waiting up to 5m0s for pod "downward-api-dd8fe4b7-91cf-4bd0-84c3-85de64bcb6de" in namespace "downward-api-5486" to be "Succeeded or Failed"
    Sep 21 21:29:15.087: INFO: Pod "downward-api-dd8fe4b7-91cf-4bd0-84c3-85de64bcb6de": Phase="Pending", Reason="", readiness=false. Elapsed: 16.453763ms
    Sep 21 21:29:17.092: INFO: Pod "downward-api-dd8fe4b7-91cf-4bd0-84c3-85de64bcb6de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021449005s
    STEP: Saw pod success
    Sep 21 21:29:17.092: INFO: Pod "downward-api-dd8fe4b7-91cf-4bd0-84c3-85de64bcb6de" satisfied condition "Succeeded or Failed"
    Sep 21 21:29:17.095: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod downward-api-dd8fe4b7-91cf-4bd0-84c3-85de64bcb6de container dapi-container: <nil>
    STEP: delete the pod
    Sep 21 21:29:17.133: INFO: Waiting for pod downward-api-dd8fe4b7-91cf-4bd0-84c3-85de64bcb6de to disappear
    Sep 21 21:29:17.136: INFO: Pod downward-api-dd8fe4b7-91cf-4bd0-84c3-85de64bcb6de no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:29:17.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-5486" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":79,"skipped":1401,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:29:17.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-515" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":1032,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:29:27.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-7957" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":60,"skipped":1052,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 70 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:29:39.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-9301" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":80,"skipped":1407,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:29:41.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-8766" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":81,"skipped":1424,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:29:49.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3856" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":82,"skipped":1432,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:29:50.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-907" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":83,"skipped":1453,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":61,"skipped":1115,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:29:29.792: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-probe
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:30:29.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-7496" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":62,"skipped":1115,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:30:32.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-635" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":84,"skipped":1468,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:30:30.052: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-65da6c52-a92e-4f67-b1fb-57cd054a8088
    STEP: Creating a pod to test consume configMaps
    Sep 21 21:30:30.127: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d4c81a1-31e3-43f0-a950-b60efd3f9d69" in namespace "configmap-9793" to be "Succeeded or Failed"
    Sep 21 21:30:30.134: INFO: Pod "pod-configmaps-2d4c81a1-31e3-43f0-a950-b60efd3f9d69": Phase="Pending", Reason="", readiness=false. Elapsed: 6.835036ms
    Sep 21 21:30:32.140: INFO: Pod "pod-configmaps-2d4c81a1-31e3-43f0-a950-b60efd3f9d69": Phase="Running", Reason="", readiness=true. Elapsed: 2.012831615s
    Sep 21 21:30:34.148: INFO: Pod "pod-configmaps-2d4c81a1-31e3-43f0-a950-b60efd3f9d69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02047992s
    STEP: Saw pod success
    Sep 21 21:30:34.148: INFO: Pod "pod-configmaps-2d4c81a1-31e3-43f0-a950-b60efd3f9d69" satisfied condition "Succeeded or Failed"
    Sep 21 21:30:34.154: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod pod-configmaps-2d4c81a1-31e3-43f0-a950-b60efd3f9d69 container configmap-volume-test: <nil>
    STEP: delete the pod
    Sep 21 21:30:34.213: INFO: Waiting for pod pod-configmaps-2d4c81a1-31e3-43f0-a950-b60efd3f9d69 to disappear
    Sep 21 21:30:34.224: INFO: Pod pod-configmaps-2d4c81a1-31e3-43f0-a950-b60efd3f9d69 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:30:34.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9793" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":63,"skipped":1175,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:30:40.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-1667" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":85,"skipped":1490,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:30:40.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-4927" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":86,"skipped":1505,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:30:40.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-7838" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":87,"skipped":1514,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-2wz2
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 21 21:30:34.454: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2wz2" in namespace "subpath-1557" to be "Succeeded or Failed"
    Sep 21 21:30:34.462: INFO: Pod "pod-subpath-test-configmap-2wz2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.428293ms
    Sep 21 21:30:36.470: INFO: Pod "pod-subpath-test-configmap-2wz2": Phase="Running", Reason="", readiness=true. Elapsed: 2.015458928s
    Sep 21 21:30:38.476: INFO: Pod "pod-subpath-test-configmap-2wz2": Phase="Running", Reason="", readiness=true. Elapsed: 4.021853945s
    Sep 21 21:30:40.493: INFO: Pod "pod-subpath-test-configmap-2wz2": Phase="Running", Reason="", readiness=true. Elapsed: 6.038379583s
    Sep 21 21:30:42.502: INFO: Pod "pod-subpath-test-configmap-2wz2": Phase="Running", Reason="", readiness=true. Elapsed: 8.047497337s
    Sep 21 21:30:44.510: INFO: Pod "pod-subpath-test-configmap-2wz2": Phase="Running", Reason="", readiness=true. Elapsed: 10.055723049s
    Sep 21 21:30:46.518: INFO: Pod "pod-subpath-test-configmap-2wz2": Phase="Running", Reason="", readiness=true. Elapsed: 12.063904082s
    Sep 21 21:30:48.527: INFO: Pod "pod-subpath-test-configmap-2wz2": Phase="Running", Reason="", readiness=true. Elapsed: 14.073072166s
    Sep 21 21:30:50.535: INFO: Pod "pod-subpath-test-configmap-2wz2": Phase="Running", Reason="", readiness=true. Elapsed: 16.080523449s
    Sep 21 21:30:52.542: INFO: Pod "pod-subpath-test-configmap-2wz2": Phase="Running", Reason="", readiness=true. Elapsed: 18.087776502s
    Sep 21 21:30:54.549: INFO: Pod "pod-subpath-test-configmap-2wz2": Phase="Running", Reason="", readiness=true. Elapsed: 20.095101234s
    Sep 21 21:30:56.561: INFO: Pod "pod-subpath-test-configmap-2wz2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.10641057s
    STEP: Saw pod success
    Sep 21 21:30:56.561: INFO: Pod "pod-subpath-test-configmap-2wz2" satisfied condition "Succeeded or Failed"
    Sep 21 21:30:56.568: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod pod-subpath-test-configmap-2wz2 container test-container-subpath-configmap-2wz2: <nil>
    STEP: delete the pod
    Sep 21 21:30:56.603: INFO: Waiting for pod pod-subpath-test-configmap-2wz2 to disappear
    Sep 21 21:30:56.608: INFO: Pod pod-subpath-test-configmap-2wz2 no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-2wz2
    Sep 21 21:30:56.608: INFO: Deleting pod "pod-subpath-test-configmap-2wz2" in namespace "subpath-1557"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:30:56.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-1557" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":64,"skipped":1198,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 39 lines ...
    STEP: Destroying namespace "services-6343" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":65,"skipped":1200,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep 21 21:30:44.787: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:30:44.795: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:30:44.837: INFO: Unable to read jessie_udp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:30:44.846: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:30:44.859: INFO: Lookups using dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879 failed for: [wheezy_udp@dns-test-service-2.dns-579.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-579.svc.cluster.local jessie_udp@dns-test-service-2.dns-579.svc.cluster.local jessie_tcp@dns-test-service-2.dns-579.svc.cluster.local]
    
    Sep 21 21:30:49.878: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:30:49.887: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:30:49.922: INFO: Unable to read jessie_udp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:30:49.930: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:30:49.946: INFO: Lookups using dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879 failed for: [wheezy_udp@dns-test-service-2.dns-579.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-579.svc.cluster.local jessie_udp@dns-test-service-2.dns-579.svc.cluster.local jessie_tcp@dns-test-service-2.dns-579.svc.cluster.local]
    
    Sep 21 21:30:54.888: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:30:54.897: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:30:54.925: INFO: Unable to read jessie_udp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:30:54.933: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:30:54.944: INFO: Lookups using dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879 failed for: [wheezy_udp@dns-test-service-2.dns-579.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-579.svc.cluster.local jessie_udp@dns-test-service-2.dns-579.svc.cluster.local jessie_tcp@dns-test-service-2.dns-579.svc.cluster.local]
    
    Sep 21 21:30:59.923: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:30:59.929: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:30:59.995: INFO: Unable to read jessie_udp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:31:00.002: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:31:00.020: INFO: Lookups using dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879 failed for: [wheezy_udp@dns-test-service-2.dns-579.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-579.svc.cluster.local jessie_udp@dns-test-service-2.dns-579.svc.cluster.local jessie_tcp@dns-test-service-2.dns-579.svc.cluster.local]
    
    Sep 21 21:31:04.882: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:31:04.889: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:31:04.928: INFO: Unable to read jessie_udp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:31:04.935: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:31:04.950: INFO: Lookups using dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879 failed for: [wheezy_udp@dns-test-service-2.dns-579.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-579.svc.cluster.local jessie_udp@dns-test-service-2.dns-579.svc.cluster.local jessie_tcp@dns-test-service-2.dns-579.svc.cluster.local]
    
    Sep 21 21:31:09.885: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:31:09.896: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:31:09.946: INFO: Unable to read jessie_udp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:31:09.956: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-579.svc.cluster.local from pod dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879: the server could not find the requested resource (get pods dns-test-d509cdd5-b2a6-4659-879f-f363bc493879)
    Sep 21 21:31:09.970: INFO: Lookups using dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879 failed for: [wheezy_udp@dns-test-service-2.dns-579.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-579.svc.cluster.local jessie_udp@dns-test-service-2.dns-579.svc.cluster.local jessie_tcp@dns-test-service-2.dns-579.svc.cluster.local]
    
    Sep 21 21:31:14.961: INFO: DNS probes using dns-579/dns-test-d509cdd5-b2a6-4659-879f-f363bc493879 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:31:15.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-579" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":88,"skipped":1524,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:31:15.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-6623" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":66,"skipped":1204,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-6868-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":89,"skipped":1525,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 21 21:31:22.760: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8af64b9c-7faf-435c-bd25-05ad170f074e" in namespace "projected-3520" to be "Succeeded or Failed"
    Sep 21 21:31:22.768: INFO: Pod "downwardapi-volume-8af64b9c-7faf-435c-bd25-05ad170f074e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.205052ms
    Sep 21 21:31:24.780: INFO: Pod "downwardapi-volume-8af64b9c-7faf-435c-bd25-05ad170f074e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019573998s
    STEP: Saw pod success
    Sep 21 21:31:24.780: INFO: Pod "downwardapi-volume-8af64b9c-7faf-435c-bd25-05ad170f074e" satisfied condition "Succeeded or Failed"
    Sep 21 21:31:24.787: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod downwardapi-volume-8af64b9c-7faf-435c-bd25-05ad170f074e container client-container: <nil>
    STEP: delete the pod
    Sep 21 21:31:24.839: INFO: Waiting for pod downwardapi-volume-8af64b9c-7faf-435c-bd25-05ad170f074e to disappear
    Sep 21 21:31:24.845: INFO: Pod downwardapi-volume-8af64b9c-7faf-435c-bd25-05ad170f074e no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:31:24.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3520" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":90,"skipped":1550,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:31:27.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-3711" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":91,"skipped":1578,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:31:27.224: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep 21 21:31:27.298: INFO: Waiting up to 5m0s for pod "pod-436a5fdd-d0ef-4c6e-a34b-b67600c225d6" in namespace "emptydir-4938" to be "Succeeded or Failed"
    Sep 21 21:31:27.308: INFO: Pod "pod-436a5fdd-d0ef-4c6e-a34b-b67600c225d6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.718024ms
    Sep 21 21:31:29.319: INFO: Pod "pod-436a5fdd-d0ef-4c6e-a34b-b67600c225d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020864562s
    STEP: Saw pod success
    Sep 21 21:31:29.320: INFO: Pod "pod-436a5fdd-d0ef-4c6e-a34b-b67600c225d6" satisfied condition "Succeeded or Failed"
    Sep 21 21:31:29.338: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod pod-436a5fdd-d0ef-4c6e-a34b-b67600c225d6 container test-container: <nil>
    STEP: delete the pod
    Sep 21 21:31:29.388: INFO: Waiting for pod pod-436a5fdd-d0ef-4c6e-a34b-b67600c225d6 to disappear
    Sep 21 21:31:29.399: INFO: Pod pod-436a5fdd-d0ef-4c6e-a34b-b67600c225d6 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:31:29.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4938" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":92,"skipped":1605,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:31:31.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-5450" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":93,"skipped":1630,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:31:31.773: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-07e1ebec-6687-4b0d-8f3f-bd7688f8cd65
    STEP: Creating a pod to test consume configMaps
    Sep 21 21:31:31.858: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc741988-e88b-4eb8-92ba-f85a26b55f41" in namespace "configmap-7285" to be "Succeeded or Failed"
    Sep 21 21:31:31.870: INFO: Pod "pod-configmaps-dc741988-e88b-4eb8-92ba-f85a26b55f41": Phase="Pending", Reason="", readiness=false. Elapsed: 11.552036ms
    Sep 21 21:31:33.881: INFO: Pod "pod-configmaps-dc741988-e88b-4eb8-92ba-f85a26b55f41": Phase="Running", Reason="", readiness=true. Elapsed: 2.022464389s
    Sep 21 21:31:35.888: INFO: Pod "pod-configmaps-dc741988-e88b-4eb8-92ba-f85a26b55f41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029204662s
    STEP: Saw pod success

    Sep 21 21:31:35.893: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-13tw3l pod pod-configmaps-dc741988-e88b-4eb8-92ba-f85a26b55f41 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 21 21:31:35.939: INFO: Waiting for pod pod-configmaps-dc741988-e88b-4eb8-92ba-f85a26b55f41 to disappear
    Sep 21 21:31:35.943: INFO: Pod pod-configmaps-dc741988-e88b-4eb8-92ba-f85a26b55f41 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:31:35.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-7285" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":94,"skipped":1669,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
    [It] should contain environment variables for services [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 21 21:31:36.203: INFO: The status of Pod server-envvars-013becbd-e4e5-497a-9f17-aecdfc370bfd is Pending, waiting for it to be Running (with Ready = true)
    Sep 21 21:31:38.211: INFO: The status of Pod server-envvars-013becbd-e4e5-497a-9f17-aecdfc370bfd is Running (Ready = true)
    Sep 21 21:31:38.264: INFO: Waiting up to 5m0s for pod "client-envvars-270c8412-9b4d-4eb7-ad01-4ac3fd597fa7" in namespace "pods-2871" to be "Succeeded or Failed"
    Sep 21 21:31:38.278: INFO: Pod "client-envvars-270c8412-9b4d-4eb7-ad01-4ac3fd597fa7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.203291ms
    Sep 21 21:31:40.289: INFO: Pod "client-envvars-270c8412-9b4d-4eb7-ad01-4ac3fd597fa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024681029s
    STEP: Saw pod success
    Sep 21 21:31:40.289: INFO: Pod "client-envvars-270c8412-9b4d-4eb7-ad01-4ac3fd597fa7" satisfied condition "Succeeded or Failed"
    Sep 21 21:31:40.299: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod client-envvars-270c8412-9b4d-4eb7-ad01-4ac3fd597fa7 container env3cont: <nil>
    STEP: delete the pod
    Sep 21 21:31:40.336: INFO: Waiting for pod client-envvars-270c8412-9b4d-4eb7-ad01-4ac3fd597fa7 to disappear
    Sep 21 21:31:40.344: INFO: Pod client-envvars-270c8412-9b4d-4eb7-ad01-4ac3fd597fa7 no longer exists
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:31:40.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-2871" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":95,"skipped":1729,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:31:40.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-6218" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":96,"skipped":1746,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:32:02.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-1681" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":97,"skipped":1748,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:32:21.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4371" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":98,"skipped":1763,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:32:21.817: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name projected-secret-test-c0813715-b438-4b44-a54d-ea5273be3b44
    STEP: Creating a pod to test consume secrets
    Sep 21 21:32:21.912: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7354e9e5-7b15-40a7-bd44-fd279e34e8af" in namespace "projected-8928" to be "Succeeded or Failed"
    Sep 21 21:32:21.918: INFO: Pod "pod-projected-secrets-7354e9e5-7b15-40a7-bd44-fd279e34e8af": Phase="Pending", Reason="", readiness=false. Elapsed: 5.977575ms
    Sep 21 21:32:23.925: INFO: Pod "pod-projected-secrets-7354e9e5-7b15-40a7-bd44-fd279e34e8af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013048016s
    STEP: Saw pod success
    Sep 21 21:32:23.925: INFO: Pod "pod-projected-secrets-7354e9e5-7b15-40a7-bd44-fd279e34e8af" satisfied condition "Succeeded or Failed"
    Sep 21 21:32:23.931: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-13tw3l pod pod-projected-secrets-7354e9e5-7b15-40a7-bd44-fd279e34e8af container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 21 21:32:23.986: INFO: Waiting for pod pod-projected-secrets-7354e9e5-7b15-40a7-bd44-fd279e34e8af to disappear
    Sep 21 21:32:23.993: INFO: Pod pod-projected-secrets-7354e9e5-7b15-40a7-bd44-fd279e34e8af no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:32:23.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8928" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":99,"skipped":1767,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:32:24.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-3130" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":100,"skipped":1801,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    • [SLOW TEST:144.693 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should have monotonically increasing restart count [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":101,"skipped":1809,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:243.173 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":1210,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:35:19.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-3760" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":68,"skipped":1273,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:35:19.303: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep 21 21:35:19.379: INFO: Waiting up to 5m0s for pod "pod-eb8de134-3281-4996-9196-46d3c5c37805" in namespace "emptydir-8324" to be "Succeeded or Failed"
    Sep 21 21:35:19.385: INFO: Pod "pod-eb8de134-3281-4996-9196-46d3c5c37805": Phase="Pending", Reason="", readiness=false. Elapsed: 5.786434ms
    Sep 21 21:35:21.394: INFO: Pod "pod-eb8de134-3281-4996-9196-46d3c5c37805": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014768498s
    STEP: Saw pod success
    Sep 21 21:35:21.394: INFO: Pod "pod-eb8de134-3281-4996-9196-46d3c5c37805" satisfied condition "Succeeded or Failed"
    Sep 21 21:35:21.405: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod pod-eb8de134-3281-4996-9196-46d3c5c37805 container test-container: <nil>
    STEP: delete the pod
    Sep 21 21:35:21.467: INFO: Waiting for pod pod-eb8de134-3281-4996-9196-46d3c5c37805 to disappear
    Sep 21 21:35:21.475: INFO: Pod pod-eb8de134-3281-4996-9196-46d3c5c37805 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:35:21.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8324" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":69,"skipped":1293,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:35:21.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-8534" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":70,"skipped":1326,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:35:21.790: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a volume subpath [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in volume subpath
    Sep 21 21:35:21.881: INFO: Waiting up to 5m0s for pod "var-expansion-e67ae68c-3b8d-4448-8e34-daeb3ec52313" in namespace "var-expansion-6002" to be "Succeeded or Failed"
    Sep 21 21:35:21.891: INFO: Pod "var-expansion-e67ae68c-3b8d-4448-8e34-daeb3ec52313": Phase="Pending", Reason="", readiness=false. Elapsed: 9.958418ms
    Sep 21 21:35:23.901: INFO: Pod "var-expansion-e67ae68c-3b8d-4448-8e34-daeb3ec52313": Phase="Running", Reason="", readiness=true. Elapsed: 2.019696915s
    Sep 21 21:35:25.908: INFO: Pod "var-expansion-e67ae68c-3b8d-4448-8e34-daeb3ec52313": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026886226s
    STEP: Saw pod success
    Sep 21 21:35:25.908: INFO: Pod "var-expansion-e67ae68c-3b8d-4448-8e34-daeb3ec52313" satisfied condition "Succeeded or Failed"
    Sep 21 21:35:25.915: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod var-expansion-e67ae68c-3b8d-4448-8e34-daeb3ec52313 container dapi-container: <nil>
    STEP: delete the pod
    Sep 21 21:35:25.948: INFO: Waiting for pod var-expansion-e67ae68c-3b8d-4448-8e34-daeb3ec52313 to disappear
    Sep 21 21:35:25.953: INFO: Pod var-expansion-e67ae68c-3b8d-4448-8e34-daeb3ec52313 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:35:25.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-6002" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":71,"skipped":1331,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":83,"skipped":1348,"failed":5,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    [BeforeEach] [sig-node] PreStop
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:28:17.111: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename prestop
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 2 lines ...
    [It] should call prestop when killing a pod  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating server pod server in namespace prestop-5296
    STEP: Waiting for pods to come up.
    STEP: Creating tester pod tester in namespace prestop-5296
    STEP: Deleting pre-stop pod
    STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
    STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
    Sep 21 21:35:33.309: FAIL: validating pre-stop.
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-node] PreStop
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
      should call prestop when killing a pod  [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 21 21:35:33.309: validating pre-stop.
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 42 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:35:34.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3099" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":72,"skipped":1344,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 52 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:35:43.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-6536" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":73,"skipped":1371,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:35:43.847: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:35:47.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-8434" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":74,"skipped":1371,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:36:07.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-5616" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":75,"skipped":1446,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:36:12.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3172" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":76,"skipped":1492,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with secret pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-secret-t7s6
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 21 21:36:12.335: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-t7s6" in namespace "subpath-405" to be "Succeeded or Failed"
    Sep 21 21:36:12.340: INFO: Pod "pod-subpath-test-secret-t7s6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143539ms
    Sep 21 21:36:14.344: INFO: Pod "pod-subpath-test-secret-t7s6": Phase="Running", Reason="", readiness=true. Elapsed: 2.00889395s
    Sep 21 21:36:16.349: INFO: Pod "pod-subpath-test-secret-t7s6": Phase="Running", Reason="", readiness=true. Elapsed: 4.013905059s
    Sep 21 21:36:18.356: INFO: Pod "pod-subpath-test-secret-t7s6": Phase="Running", Reason="", readiness=true. Elapsed: 6.02019578s
    Sep 21 21:36:20.361: INFO: Pod "pod-subpath-test-secret-t7s6": Phase="Running", Reason="", readiness=true. Elapsed: 8.025461939s
    Sep 21 21:36:22.367: INFO: Pod "pod-subpath-test-secret-t7s6": Phase="Running", Reason="", readiness=true. Elapsed: 10.031658089s
    Sep 21 21:36:24.373: INFO: Pod "pod-subpath-test-secret-t7s6": Phase="Running", Reason="", readiness=true. Elapsed: 12.037288173s
    Sep 21 21:36:26.380: INFO: Pod "pod-subpath-test-secret-t7s6": Phase="Running", Reason="", readiness=true. Elapsed: 14.044051087s
    Sep 21 21:36:28.386: INFO: Pod "pod-subpath-test-secret-t7s6": Phase="Running", Reason="", readiness=true. Elapsed: 16.049962144s
    Sep 21 21:36:30.390: INFO: Pod "pod-subpath-test-secret-t7s6": Phase="Running", Reason="", readiness=true. Elapsed: 18.054745198s
    Sep 21 21:36:32.396: INFO: Pod "pod-subpath-test-secret-t7s6": Phase="Running", Reason="", readiness=true. Elapsed: 20.060699148s
    Sep 21 21:36:34.403: INFO: Pod "pod-subpath-test-secret-t7s6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.067499983s
    STEP: Saw pod success
    Sep 21 21:36:34.403: INFO: Pod "pod-subpath-test-secret-t7s6" satisfied condition "Succeeded or Failed"
    Sep 21 21:36:34.408: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-f3twbs pod pod-subpath-test-secret-t7s6 container test-container-subpath-secret-t7s6: <nil>
    STEP: delete the pod
    Sep 21 21:36:34.430: INFO: Waiting for pod pod-subpath-test-secret-t7s6 to disappear
    Sep 21 21:36:34.435: INFO: Pod pod-subpath-test-secret-t7s6 no longer exists
    STEP: Deleting pod pod-subpath-test-secret-t7s6
    Sep 21 21:36:34.435: INFO: Deleting pod "pod-subpath-test-secret-t7s6" in namespace "subpath-405"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:36:34.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-405" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":77,"skipped":1495,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir wrapper volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:36:36.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-wrapper-5262" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":78,"skipped":1510,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:36:38.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-649" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":79,"skipped":1537,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:36:38.819: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-843af719-16c6-4ec5-8c14-1cd76c7320ee
    STEP: Creating a pod to test consume configMaps
    Sep 21 21:36:38.892: INFO: Waiting up to 5m0s for pod "pod-configmaps-a6d31580-fadd-49a5-8e1d-a91a91334d28" in namespace "configmap-4697" to be "Succeeded or Failed"
    Sep 21 21:36:38.896: INFO: Pod "pod-configmaps-a6d31580-fadd-49a5-8e1d-a91a91334d28": Phase="Pending", Reason="", readiness=false. Elapsed: 3.816623ms
    Sep 21 21:36:40.901: INFO: Pod "pod-configmaps-a6d31580-fadd-49a5-8e1d-a91a91334d28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008789787s
    STEP: Saw pod success
    Sep 21 21:36:40.901: INFO: Pod "pod-configmaps-a6d31580-fadd-49a5-8e1d-a91a91334d28" satisfied condition "Succeeded or Failed"
    Sep 21 21:36:40.904: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-66tdg pod pod-configmaps-a6d31580-fadd-49a5-8e1d-a91a91334d28 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 21 21:36:40.933: INFO: Waiting for pod pod-configmaps-a6d31580-fadd-49a5-8e1d-a91a91334d28 to disappear
    Sep 21 21:36:40.936: INFO: Pod pod-configmaps-a6d31580-fadd-49a5-8e1d-a91a91334d28 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:36:40.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4697" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":80,"skipped":1540,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-1471-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":81,"skipped":1550,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
    STEP: Destroying namespace "services-3446" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":82,"skipped":1563,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:37:15.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-8092" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":83,"skipped":1584,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:37:15.071: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via environment variable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-4531/configmap-test-0bc82666-22d4-4482-b333-cae2fc5df500
    STEP: Creating a pod to test consume configMaps
    Sep 21 21:37:15.122: INFO: Waiting up to 5m0s for pod "pod-configmaps-ab1c66c9-19ef-459c-9513-ed08ac0fdfd1" in namespace "configmap-4531" to be "Succeeded or Failed"
    Sep 21 21:37:15.126: INFO: Pod "pod-configmaps-ab1c66c9-19ef-459c-9513-ed08ac0fdfd1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259418ms
    Sep 21 21:37:17.133: INFO: Pod "pod-configmaps-ab1c66c9-19ef-459c-9513-ed08ac0fdfd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010686485s
    STEP: Saw pod success
    Sep 21 21:37:17.133: INFO: Pod "pod-configmaps-ab1c66c9-19ef-459c-9513-ed08ac0fdfd1" satisfied condition "Succeeded or Failed"
    Sep 21 21:37:17.138: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-worker-13tw3l pod pod-configmaps-ab1c66c9-19ef-459c-9513-ed08ac0fdfd1 container env-test: <nil>
    STEP: delete the pod
    Sep 21 21:37:17.168: INFO: Waiting for pod pod-configmaps-ab1c66c9-19ef-459c-9513-ed08ac0fdfd1 to disappear
    Sep 21 21:37:17.172: INFO: Pod pod-configmaps-ab1c66c9-19ef-459c-9513-ed08ac0fdfd1 no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:37:17.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4531" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":84,"skipped":1595,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:37:17.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-6134" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":85,"skipped":1620,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 21 21:37:17.394: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-57ca23d4-9a02-4ea1-b7b1-b5c87d1b0924
    STEP: Creating a pod to test consume configMaps
    Sep 21 21:37:17.447: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f967e9a6-2760-4c98-9499-e803c1803a25" in namespace "projected-9349" to be "Succeeded or Failed"
    Sep 21 21:37:17.451: INFO: Pod "pod-projected-configmaps-f967e9a6-2760-4c98-9499-e803c1803a25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03774ms
    Sep 21 21:37:19.456: INFO: Pod "pod-projected-configmaps-f967e9a6-2760-4c98-9499-e803c1803a25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008584669s
    STEP: Saw pod success
    Sep 21 21:37:19.456: INFO: Pod "pod-projected-configmaps-f967e9a6-2760-4c98-9499-e803c1803a25" satisfied condition "Succeeded or Failed"
    Sep 21 21:37:19.459: INFO: Trying to get logs from node k8s-upgrade-and-conformance-kcibnj-md-0-zfvbf-59596bf7b7-dfcb4 pod pod-projected-configmaps-f967e9a6-2760-4c98-9499-e803c1803a25 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 21 21:37:19.479: INFO: Waiting for pod pod-projected-configmaps-f967e9a6-2760-4c98-9499-e803c1803a25 to disappear
    Sep 21 21:37:19.484: INFO: Pod pod-projected-configmaps-f967e9a6-2760-4c98-9499-e803c1803a25 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 21 21:37:19.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9349" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":86,"skipped":1657,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSTEP: Dumping logs from the "k8s-upgrade-and-conformance-kcibnj" workload cluster 09/21/22 21:38:32.545
    STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-tlj9bs" namespace 09/21/22 21:38:36.432
    STEP: Deleting cluster k8s-upgrade-and-conformance-tlj9bs/k8s-upgrade-and-conformance-kcibnj 09/21/22 21:38:36.788
    STEP: Deleting cluster k8s-upgrade-and-conformance-kcibnj 09/21/22 21:38:36.815
    INFO: Waiting for the Cluster k8s-upgrade-and-conformance-tlj9bs/k8s-upgrade-and-conformance-kcibnj to be deleted
... skipping 620 lines ...
  [INTERRUPTED] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] [It] Should create and upgrade a workload cluster and eventually run kubetest
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118
  [INTERRUPTED] [SynchronizedAfterSuite] 
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/e2e_suite_test.go:169

Ran 1 of 21 Specs in 3510.502 seconds
FAIL! - Interrupted by Other Ginkgo Process -- 0 Passed | 1 Failed | 0 Pending | 20 Skipped


Ginkgo ran 1 suite in 1h0m24.79317373s

Test Suite Failed
make: *** [Makefile:128: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
++ pgrep -f 'docker events'
+ kill 26261
++ pgrep -f 'ctr -n moby events'
+ kill 26262
... skipping 23 lines ...