Result: FAILURE
Tests: 0 failed / 7 succeeded
Started: 2022-09-17 00:45
Elapsed: 1h6m
Revision: main

No Test Failures!


7 passed tests, 20 skipped tests.

Error lines from build-log.txt

... skipping 895 lines ...
Status: Downloaded newer image for quay.io/jetstack/cert-manager-controller:v1.9.1
quay.io/jetstack/cert-manager-controller:v1.9.1
+ export GINKGO_NODES=3
+ GINKGO_NODES=3
+ export GINKGO_NOCOLOR=true
+ GINKGO_NOCOLOR=true
+ export GINKGO_ARGS=--fail-fast
+ GINKGO_ARGS=--fail-fast
+ export E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ export ARTIFACTS=/logs/artifacts
+ ARTIFACTS=/logs/artifacts
+ export SKIP_RESOURCE_CLEANUP=false
+ SKIP_RESOURCE_CLEANUP=false
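
The `set -x` trace above shows the variables this job exports before invoking Ginkgo. A minimal sketch for reproducing that environment outside CI (values copied from this log; the `E2E_CONF_FILE` path is the Prow container's and would differ locally):

```shell
# Ginkgo settings as set by this job
export GINKGO_NODES=3          # run specs across 3 parallel nodes
export GINKGO_NOCOLOR=true     # plain output, suitable for CI logs
export GINKGO_ARGS=--fail-fast # abort the suite on the first failing spec

# e2e configuration and artifact locations (Prow container paths)
export E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
export ARTIFACTS=/logs/artifacts
export SKIP_RESOURCE_CLEANUP=false  # tear down test clusters when done
```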
... skipping 78 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6 --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition.yaml
mkdir -p /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/extension/config/default > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension/deployment.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/ginkgo-v2.1.4 -v --trace --tags=e2e --focus="\[K8s-Upgrade\]"  --nodes=3 --no-color=true --output-dir="/logs/artifacts" --junit-report="junit.e2e_suite.1.xml" --fail-fast . -- \
    -e2e.artifacts-folder="/logs/artifacts" \
    -e2e.config="/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml" \
    -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false
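
The `--focus="\[K8s-Upgrade\]"` flag above is a regular expression that Ginkgo matches against full spec descriptions, which is why the brackets are escaped. A grep illustration of the same matching, using hypothetical spec names (not taken from this log):

```shell
# Only descriptions containing the literal tag [K8s-Upgrade] survive the filter.
printf '%s\n' \
  'When upgrading a workload cluster [K8s-Upgrade] should run conformance' \
  'When creating a cluster should provision machines' |
  grep -E '\[K8s-Upgrade\]'
# → When upgrading a workload cluster [K8s-Upgrade] should run conformance
```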
go: downloading k8s.io/apimachinery v0.25.0
go: downloading github.com/onsi/gomega v1.20.0
go: downloading k8s.io/api v0.25.0
... skipping 226 lines ...
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-8gqwip-mp-0-config created
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-8gqwip-mp-0-config-cgroupfs created
    cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-8gqwip created
    machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-8gqwip-mp-0 created
    dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-8gqwip-dmp-0 created

    Failed to get logs for Machine k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt, Cluster k8s-upgrade-and-conformance-yh3rl6/k8s-upgrade-and-conformance-8gqwip: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr, Cluster k8s-upgrade-and-conformance-yh3rl6/k8s-upgrade-and-conformance-8gqwip: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc, Cluster k8s-upgrade-and-conformance-yh3rl6/k8s-upgrade-and-conformance-8gqwip: exit status 2
    Failed to get logs for MachinePool k8s-upgrade-and-conformance-8gqwip-mp-0, Cluster k8s-upgrade-and-conformance-yh3rl6/k8s-upgrade-and-conformance-8gqwip: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec 09/17/22 00:54:09.316
    INFO: Creating namespace k8s-upgrade-and-conformance-yh3rl6
    INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-yh3rl6"
... skipping 41 lines ...
    
    Running in parallel across 4 nodes
    
    Sep 17 01:04:33.359: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 17 01:04:33.363: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
    Sep 17 01:04:33.384: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
    Sep 17 01:04:33.428: INFO: The status of Pod coredns-f9fd979d6-5z6z2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:33.428: INFO: The status of Pod coredns-f9fd979d6-t8z2q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:33.428: INFO: The status of Pod kindnet-nb79f is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:33.428: INFO: The status of Pod kindnet-p4nd6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:33.428: INFO: The status of Pod kube-proxy-mc2jw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:33.428: INFO: The status of Pod kube-proxy-znz8z is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:33.428: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
    Sep 17 01:04:33.428: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:04:33.428: INFO: POD                      NODE                                              PHASE    GRACE  CONDITIONS
    Sep 17 01:04:33.428: INFO: coredns-f9fd979d6-5z6z2  k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  }]
    Sep 17 01:04:33.429: INFO: coredns-f9fd979d6-t8z2q  k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  }]
    Sep 17 01:04:33.429: INFO: kindnet-nb79f            k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:00 +0000 UTC  }]
    Sep 17 01:04:33.429: INFO: kindnet-p4nd6            k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:44 +0000 UTC  }]
    Sep 17 01:04:33.429: INFO: kube-proxy-mc2jw         k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  }]
    Sep 17 01:04:33.429: INFO: kube-proxy-znz8z         k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  }]
    Sep 17 01:04:33.429: INFO: 
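
    Each poll iteration above reports a ready/total ratio. A small awk sketch that pulls those counts out of such lines (the sample line is copied from this log):

```shell
line="Sep 17 01:04:33.428: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)"
# Walk the whitespace-separated fields and key off the "/" separator
# in the "N / M pods" portion of the line.
printf '%s\n' "$line" |
  awk '{ for (i = 1; i <= NF; i++) if ($i == "/") { print "ready=" $(i-1), "total=" $(i+1); break } }'
# → ready=14 total=20
```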
    Sep 17 01:04:35.450: INFO: The status of Pod coredns-f9fd979d6-5z6z2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:35.450: INFO: The status of Pod coredns-f9fd979d6-t8z2q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:35.450: INFO: The status of Pod kindnet-nb79f is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:35.450: INFO: The status of Pod kindnet-p4nd6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:35.450: INFO: The status of Pod kube-proxy-mc2jw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:35.450: INFO: The status of Pod kube-proxy-znz8z is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:35.450: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
    Sep 17 01:04:35.450: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:04:35.450: INFO: POD                      NODE                                              PHASE    GRACE  CONDITIONS
    Sep 17 01:04:35.450: INFO: coredns-f9fd979d6-5z6z2  k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  }]
    Sep 17 01:04:35.450: INFO: coredns-f9fd979d6-t8z2q  k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  }]
    Sep 17 01:04:35.450: INFO: kindnet-nb79f            k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:00 +0000 UTC  }]
    Sep 17 01:04:35.450: INFO: kindnet-p4nd6            k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:44 +0000 UTC  }]
    Sep 17 01:04:35.450: INFO: kube-proxy-mc2jw         k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  }]
    Sep 17 01:04:35.450: INFO: kube-proxy-znz8z         k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  }]
    Sep 17 01:04:35.451: INFO: 
    Sep 17 01:04:37.448: INFO: The status of Pod coredns-f9fd979d6-5z6z2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:37.448: INFO: The status of Pod coredns-f9fd979d6-t8z2q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:37.448: INFO: The status of Pod kindnet-nb79f is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:37.448: INFO: The status of Pod kindnet-p4nd6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:37.448: INFO: The status of Pod kube-proxy-mc2jw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:37.448: INFO: The status of Pod kube-proxy-znz8z is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:37.448: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
    Sep 17 01:04:37.448: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:04:37.448: INFO: POD                      NODE                                              PHASE    GRACE  CONDITIONS
    Sep 17 01:04:37.448: INFO: coredns-f9fd979d6-5z6z2  k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  }]
    Sep 17 01:04:37.448: INFO: coredns-f9fd979d6-t8z2q  k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  }]
    Sep 17 01:04:37.449: INFO: kindnet-nb79f            k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:00 +0000 UTC  }]
    Sep 17 01:04:37.449: INFO: kindnet-p4nd6            k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:44 +0000 UTC  }]
    Sep 17 01:04:37.449: INFO: kube-proxy-mc2jw         k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  }]
    Sep 17 01:04:37.449: INFO: kube-proxy-znz8z         k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  }]
    Sep 17 01:04:37.449: INFO: 
    Sep 17 01:04:39.452: INFO: The status of Pod coredns-f9fd979d6-5z6z2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:39.453: INFO: The status of Pod coredns-f9fd979d6-t8z2q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:39.453: INFO: The status of Pod kindnet-nb79f is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:39.453: INFO: The status of Pod kindnet-p4nd6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:39.453: INFO: The status of Pod kube-proxy-mc2jw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:39.453: INFO: The status of Pod kube-proxy-znz8z is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:39.453: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
    Sep 17 01:04:39.453: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:04:39.453: INFO: POD                      NODE                                              PHASE    GRACE  CONDITIONS
    Sep 17 01:04:39.453: INFO: coredns-f9fd979d6-5z6z2  k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  }]
    Sep 17 01:04:39.453: INFO: coredns-f9fd979d6-t8z2q  k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  }]
    Sep 17 01:04:39.453: INFO: kindnet-nb79f            k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:00 +0000 UTC  }]
    Sep 17 01:04:39.453: INFO: kindnet-p4nd6            k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:44 +0000 UTC  }]
    Sep 17 01:04:39.453: INFO: kube-proxy-mc2jw         k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  }]
    Sep 17 01:04:39.453: INFO: kube-proxy-znz8z         k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  }]
    Sep 17 01:04:39.453: INFO: 
    Sep 17 01:04:41.452: INFO: The status of Pod coredns-f9fd979d6-5z6z2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:41.452: INFO: The status of Pod coredns-f9fd979d6-t8z2q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:41.452: INFO: The status of Pod kindnet-nb79f is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:41.452: INFO: The status of Pod kindnet-p4nd6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:41.452: INFO: The status of Pod kube-proxy-mc2jw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:41.452: INFO: The status of Pod kube-proxy-znz8z is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:41.452: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
    Sep 17 01:04:41.452: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:04:41.452: INFO: POD                      NODE                                              PHASE    GRACE  CONDITIONS
    Sep 17 01:04:41.452: INFO: coredns-f9fd979d6-5z6z2  k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  }]
    Sep 17 01:04:41.453: INFO: coredns-f9fd979d6-t8z2q  k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  }]
    Sep 17 01:04:41.453: INFO: kindnet-nb79f            k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:00 +0000 UTC  }]
    Sep 17 01:04:41.453: INFO: kindnet-p4nd6            k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:44 +0000 UTC  }]
    Sep 17 01:04:41.453: INFO: kube-proxy-mc2jw         k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  }]
    Sep 17 01:04:41.453: INFO: kube-proxy-znz8z         k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  }]
    Sep 17 01:04:41.453: INFO: 
    Sep 17 01:04:43.455: INFO: The status of Pod coredns-f9fd979d6-5z6z2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:43.455: INFO: The status of Pod coredns-f9fd979d6-t8z2q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:43.455: INFO: The status of Pod kindnet-nb79f is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:43.455: INFO: The status of Pod kindnet-p4nd6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:43.455: INFO: The status of Pod kube-proxy-mc2jw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:43.455: INFO: The status of Pod kube-proxy-znz8z is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:43.455: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (10 seconds elapsed)
    Sep 17 01:04:43.455: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:04:43.455: INFO: POD                      NODE                                              PHASE    GRACE  CONDITIONS
    Sep 17 01:04:43.455: INFO: coredns-f9fd979d6-5z6z2  k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  }]
    Sep 17 01:04:43.455: INFO: coredns-f9fd979d6-t8z2q  k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  }]
    Sep 17 01:04:43.455: INFO: kindnet-nb79f            k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:00 +0000 UTC  }]
    Sep 17 01:04:43.455: INFO: kindnet-p4nd6            k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:44 +0000 UTC  }]
    Sep 17 01:04:43.455: INFO: kube-proxy-mc2jw         k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  }]
    Sep 17 01:04:43.455: INFO: kube-proxy-znz8z         k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  }]
    Sep 17 01:04:43.455: INFO: 
    Sep 17 01:04:45.477: INFO: The status of Pod coredns-f9fd979d6-5z6z2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:45.477: INFO: The status of Pod coredns-f9fd979d6-t8z2q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:45.477: INFO: The status of Pod kindnet-nb79f is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:45.477: INFO: The status of Pod kindnet-p4nd6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:45.477: INFO: The status of Pod kube-proxy-mc2jw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:45.477: INFO: The status of Pod kube-proxy-znz8z is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:45.477: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (12 seconds elapsed)
    Sep 17 01:04:45.477: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:04:45.477: INFO: POD                      NODE                                              PHASE    GRACE  CONDITIONS
    Sep 17 01:04:45.477: INFO: coredns-f9fd979d6-5z6z2  k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  }]
    Sep 17 01:04:45.477: INFO: coredns-f9fd979d6-t8z2q  k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  }]
    Sep 17 01:04:45.477: INFO: kindnet-nb79f            k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:00 +0000 UTC  }]
    Sep 17 01:04:45.477: INFO: kindnet-p4nd6            k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:44 +0000 UTC  }]
    Sep 17 01:04:45.477: INFO: kube-proxy-mc2jw         k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  }]
    Sep 17 01:04:45.477: INFO: kube-proxy-znz8z         k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  }]
    Sep 17 01:04:45.478: INFO: 
    Sep 17 01:04:47.534: INFO: The status of Pod coredns-f9fd979d6-5z6z2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:47.535: INFO: The status of Pod coredns-f9fd979d6-t8z2q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:47.535: INFO: The status of Pod kindnet-nb79f is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:47.535: INFO: The status of Pod kindnet-p4nd6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:47.535: INFO: The status of Pod kube-proxy-mc2jw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:47.535: INFO: The status of Pod kube-proxy-znz8z is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:47.535: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (14 seconds elapsed)
    Sep 17 01:04:47.535: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:04:47.535: INFO: POD                      NODE                                              PHASE    GRACE  CONDITIONS
    Sep 17 01:04:47.535: INFO: coredns-f9fd979d6-5z6z2  k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  }]
    Sep 17 01:04:47.535: INFO: coredns-f9fd979d6-t8z2q  k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  }]
    Sep 17 01:04:47.536: INFO: kindnet-nb79f            k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:00 +0000 UTC  }]
    Sep 17 01:04:47.536: INFO: kindnet-p4nd6            k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:44 +0000 UTC  }]
    Sep 17 01:04:47.536: INFO: kube-proxy-mc2jw         k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  }]
    Sep 17 01:04:47.536: INFO: kube-proxy-znz8z         k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  }]
    Sep 17 01:04:47.536: INFO: 
    Sep 17 01:04:49.459: INFO: The status of Pod coredns-f9fd979d6-5z6z2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:49.460: INFO: The status of Pod coredns-f9fd979d6-t8z2q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:49.460: INFO: The status of Pod kindnet-nb79f is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:49.460: INFO: The status of Pod kindnet-p4nd6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:49.460: INFO: The status of Pod kube-proxy-mc2jw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:49.460: INFO: The status of Pod kube-proxy-znz8z is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:49.460: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (16 seconds elapsed)
    Sep 17 01:04:49.460: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:04:49.460: INFO: POD                      NODE                                              PHASE    GRACE  CONDITIONS
    Sep 17 01:04:49.460: INFO: coredns-f9fd979d6-5z6z2  k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  }]
    Sep 17 01:04:49.460: INFO: coredns-f9fd979d6-t8z2q  k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  }]
    Sep 17 01:04:49.460: INFO: kindnet-nb79f            k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:00 +0000 UTC  }]
    Sep 17 01:04:49.460: INFO: kindnet-p4nd6            k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:44 +0000 UTC  }]
    Sep 17 01:04:49.460: INFO: kube-proxy-mc2jw         k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  }]
    Sep 17 01:04:49.460: INFO: kube-proxy-znz8z         k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  }]
    Sep 17 01:04:49.460: INFO: 
    Sep 17 01:04:51.453: INFO: The status of Pod coredns-f9fd979d6-5z6z2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:51.453: INFO: The status of Pod coredns-f9fd979d6-t8z2q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:51.453: INFO: The status of Pod kindnet-nb79f is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:51.453: INFO: The status of Pod kindnet-p4nd6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:51.453: INFO: The status of Pod kube-proxy-mc2jw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:51.453: INFO: The status of Pod kube-proxy-znz8z is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:51.453: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (18 seconds elapsed)
    Sep 17 01:04:51.453: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:04:51.453: INFO: POD                      NODE                                              PHASE    GRACE  CONDITIONS
    Sep 17 01:04:51.453: INFO: coredns-f9fd979d6-5z6z2  k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  }]
    Sep 17 01:04:51.453: INFO: coredns-f9fd979d6-t8z2q  k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  }]
    Sep 17 01:04:51.453: INFO: kindnet-nb79f            k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:00 +0000 UTC  }]
    Sep 17 01:04:51.454: INFO: kindnet-p4nd6            k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:44 +0000 UTC  }]
    Sep 17 01:04:51.454: INFO: kube-proxy-mc2jw         k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  }]
    Sep 17 01:04:51.454: INFO: kube-proxy-znz8z         k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  }]
    Sep 17 01:04:51.454: INFO: 
    Sep 17 01:04:53.447: INFO: The status of Pod coredns-f9fd979d6-5z6z2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:53.448: INFO: The status of Pod coredns-f9fd979d6-t8z2q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:53.448: INFO: The status of Pod kindnet-nb79f is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:53.448: INFO: The status of Pod kindnet-p4nd6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:53.448: INFO: The status of Pod kube-proxy-mc2jw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:53.448: INFO: The status of Pod kube-proxy-znz8z is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:53.448: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (20 seconds elapsed)
    Sep 17 01:04:53.448: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:04:53.448: INFO: POD                      NODE                                              PHASE    GRACE  CONDITIONS
    Sep 17 01:04:53.448: INFO: coredns-f9fd979d6-5z6z2  k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  }]
    Sep 17 01:04:53.448: INFO: coredns-f9fd979d6-t8z2q  k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  }]
    Sep 17 01:04:53.448: INFO: kindnet-nb79f            k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:00 +0000 UTC  }]
    Sep 17 01:04:53.448: INFO: kindnet-p4nd6            k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:44 +0000 UTC  }]
    Sep 17 01:04:53.448: INFO: kube-proxy-mc2jw         k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  }]
    Sep 17 01:04:53.448: INFO: kube-proxy-znz8z         k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  }]
    Sep 17 01:04:53.448: INFO: 
    Sep 17 01:04:55.456: INFO: The status of Pod coredns-f9fd979d6-5z6z2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:55.456: INFO: The status of Pod coredns-f9fd979d6-t8z2q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:55.456: INFO: The status of Pod kindnet-nb79f is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:55.456: INFO: The status of Pod kindnet-p4nd6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:55.456: INFO: The status of Pod kube-proxy-mc2jw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:55.456: INFO: The status of Pod kube-proxy-znz8z is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:55.456: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (22 seconds elapsed)
    Sep 17 01:04:55.456: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:04:55.456: INFO: POD                      NODE                                              PHASE    GRACE  CONDITIONS
    Sep 17 01:04:55.456: INFO: coredns-f9fd979d6-5z6z2  k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  }]
    Sep 17 01:04:55.456: INFO: coredns-f9fd979d6-t8z2q  k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  }]
    Sep 17 01:04:55.456: INFO: kindnet-nb79f            k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:00 +0000 UTC  }]
    Sep 17 01:04:55.456: INFO: kindnet-p4nd6            k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:44 +0000 UTC  }]
    Sep 17 01:04:55.456: INFO: kube-proxy-mc2jw         k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  }]
    Sep 17 01:04:55.456: INFO: kube-proxy-znz8z         k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  }]
    Sep 17 01:04:55.456: INFO: 
    Sep 17 01:04:57.450: INFO: The status of Pod coredns-f9fd979d6-5z6z2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:57.450: INFO: The status of Pod coredns-f9fd979d6-t8z2q is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:57.450: INFO: The status of Pod kindnet-nb79f is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:57.450: INFO: The status of Pod kindnet-p4nd6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:57.450: INFO: The status of Pod kube-proxy-mc2jw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:57.450: INFO: The status of Pod kube-proxy-znz8z is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:57.450: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (24 seconds elapsed)
    Sep 17 01:04:57.450: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:04:57.450: INFO: POD                      NODE                                              PHASE    GRACE  CONDITIONS
    Sep 17 01:04:57.450: INFO: coredns-f9fd979d6-5z6z2  k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:13 +0000 UTC  }]
    Sep 17 01:04:57.450: INFO: coredns-f9fd979d6-t8z2q  k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:10 +0000 UTC  }]
    Sep 17 01:04:57.450: INFO: kindnet-nb79f            k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:00 +0000 UTC  }]
    Sep 17 01:04:57.451: INFO: kindnet-p4nd6            k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:56:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 00:55:44 +0000 UTC  }]
    Sep 17 01:04:57.451: INFO: kube-proxy-mc2jw         k8s-upgrade-and-conformance-8gqwip-worker-c3ofup  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:01:48 +0000 UTC  }]
    Sep 17 01:04:57.451: INFO: kube-proxy-znz8z         k8s-upgrade-and-conformance-8gqwip-worker-vbe7tg  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:03:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:02:27 +0000 UTC  }]
    Sep 17 01:04:57.451: INFO: 
    Sep 17 01:04:59.453: INFO: The status of Pod coredns-f9fd979d6-87xj8 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:59.453: INFO: The status of Pod coredns-f9fd979d6-xxpwr is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:04:59.453: INFO: 14 / 16 pods in namespace 'kube-system' are running and ready (26 seconds elapsed)
    Sep 17 01:04:59.453: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:04:59.453: INFO: POD                      NODE                                                            PHASE    GRACE  CONDITIONS
    Sep 17 01:04:59.453: INFO: coredns-f9fd979d6-87xj8  k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  }]
    Sep 17 01:04:59.453: INFO: coredns-f9fd979d6-xxpwr  k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  }]
    Sep 17 01:04:59.453: INFO: 
    Sep 17 01:05:01.444: INFO: The status of Pod coredns-f9fd979d6-87xj8 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:05:01.444: INFO: The status of Pod coredns-f9fd979d6-xxpwr is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:05:01.444: INFO: 14 / 16 pods in namespace 'kube-system' are running and ready (28 seconds elapsed)
    Sep 17 01:05:01.444: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:05:01.444: INFO: POD                      NODE                                                            PHASE    GRACE  CONDITIONS
    Sep 17 01:05:01.444: INFO: coredns-f9fd979d6-87xj8  k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  }]
    Sep 17 01:05:01.444: INFO: coredns-f9fd979d6-xxpwr  k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  }]
    Sep 17 01:05:01.444: INFO: 
    Sep 17 01:05:03.448: INFO: The status of Pod coredns-f9fd979d6-87xj8 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:05:03.449: INFO: The status of Pod coredns-f9fd979d6-xxpwr is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:05:03.449: INFO: 14 / 16 pods in namespace 'kube-system' are running and ready (30 seconds elapsed)
    Sep 17 01:05:03.449: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:05:03.449: INFO: POD                      NODE                                                            PHASE    GRACE  CONDITIONS
    Sep 17 01:05:03.449: INFO: coredns-f9fd979d6-87xj8  k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  }]
    Sep 17 01:05:03.449: INFO: coredns-f9fd979d6-xxpwr  k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  }]
    Sep 17 01:05:03.449: INFO: 
    Sep 17 01:05:05.446: INFO: The status of Pod coredns-f9fd979d6-87xj8 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:05:05.446: INFO: The status of Pod coredns-f9fd979d6-xxpwr is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:05:05.446: INFO: 14 / 16 pods in namespace 'kube-system' are running and ready (32 seconds elapsed)
    Sep 17 01:05:05.446: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:05:05.446: INFO: POD                      NODE                                                            PHASE    GRACE  CONDITIONS
    Sep 17 01:05:05.446: INFO: coredns-f9fd979d6-87xj8  k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  }]
    Sep 17 01:05:05.446: INFO: coredns-f9fd979d6-xxpwr  k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  }]
    Sep 17 01:05:05.446: INFO: 
    Sep 17 01:05:07.446: INFO: The status of Pod coredns-f9fd979d6-87xj8 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:05:07.446: INFO: The status of Pod coredns-f9fd979d6-xxpwr is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:05:07.446: INFO: 14 / 16 pods in namespace 'kube-system' are running and ready (34 seconds elapsed)
    Sep 17 01:05:07.446: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 17 01:05:07.446: INFO: POD                      NODE                                                            PHASE    GRACE  CONDITIONS
    Sep 17 01:05:07.446: INFO: coredns-f9fd979d6-87xj8  k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  }]
    Sep 17 01:05:07.446: INFO: coredns-f9fd979d6-xxpwr  k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  }]
    Sep 17 01:05:07.446: INFO: 
    Sep 17 01:05:09.450: INFO: The status of Pod coredns-f9fd979d6-87xj8 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 17 01:05:09.450: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (36 seconds elapsed)
    Sep 17 01:05:09.450: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 17 01:05:09.450: INFO: POD                      NODE                                                            PHASE    GRACE  CONDITIONS
    Sep 17 01:05:09.450: INFO: coredns-f9fd979d6-87xj8  k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-17 01:04:58 +0000 UTC  }]
    Sep 17 01:05:09.450: INFO: 
    Sep 17 01:05:11.444: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (38 seconds elapsed)
... skipping 31 lines ...
    Sep 17 01:05:11.539: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via environment variable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap configmap-3208/configmap-test-4aab8504-2b7b-4c3b-8c0d-08bd5c384802
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:05:11.559: INFO: Waiting up to 5m0s for pod "pod-configmaps-52827705-6966-42b2-8a03-ddc6df877e6a" in namespace "configmap-3208" to be "Succeeded or Failed"
    Sep 17 01:05:11.562: INFO: Pod "pod-configmaps-52827705-6966-42b2-8a03-ddc6df877e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.729683ms
    Sep 17 01:05:13.570: INFO: Pod "pod-configmaps-52827705-6966-42b2-8a03-ddc6df877e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011081813s
    Sep 17 01:05:15.574: INFO: Pod "pod-configmaps-52827705-6966-42b2-8a03-ddc6df877e6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015094935s
    STEP: Saw pod success
    Sep 17 01:05:15.574: INFO: Pod "pod-configmaps-52827705-6966-42b2-8a03-ddc6df877e6a" satisfied condition "Succeeded or Failed"
    Sep 17 01:05:15.577: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-configmaps-52827705-6966-42b2-8a03-ddc6df877e6a container env-test: <nil>
    STEP: delete the pod
    Sep 17 01:05:15.607: INFO: Waiting for pod pod-configmaps-52827705-6966-42b2-8a03-ddc6df877e6a to disappear
    Sep 17 01:05:15.610: INFO: Pod pod-configmaps-52827705-6966-42b2-8a03-ddc6df877e6a no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:15.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-3208" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "webhook-1459-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":1,"skipped":32,"failed":0}
    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:05:15.679: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap with name projected-configmap-test-volume-map-70a36c98-ef3e-4532-99fa-6bb0641b5126
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:05:15.738: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3f75128c-7f45-4271-9def-cfb09c53ad93" in namespace "projected-3264" to be "Succeeded or Failed"
    Sep 17 01:05:15.742: INFO: Pod "pod-projected-configmaps-3f75128c-7f45-4271-9def-cfb09c53ad93": Phase="Pending", Reason="", readiness=false. Elapsed: 3.755426ms
    Sep 17 01:05:17.746: INFO: Pod "pod-projected-configmaps-3f75128c-7f45-4271-9def-cfb09c53ad93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007836034s
    Sep 17 01:05:19.750: INFO: Pod "pod-projected-configmaps-3f75128c-7f45-4271-9def-cfb09c53ad93": Phase="Running", Reason="", readiness=true. Elapsed: 4.01168367s
    Sep 17 01:05:21.754: INFO: Pod "pod-projected-configmaps-3f75128c-7f45-4271-9def-cfb09c53ad93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015334985s
    STEP: Saw pod success
    Sep 17 01:05:21.754: INFO: Pod "pod-projected-configmaps-3f75128c-7f45-4271-9def-cfb09c53ad93" satisfied condition "Succeeded or Failed"
    Sep 17 01:05:21.756: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-projected-configmaps-3f75128c-7f45-4271-9def-cfb09c53ad93 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:05:21.771: INFO: Waiting for pod pod-projected-configmaps-3f75128c-7f45-4271-9def-cfb09c53ad93 to disappear
    Sep 17 01:05:21.777: INFO: Pod pod-projected-configmaps-3f75128c-7f45-4271-9def-cfb09c53ad93 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:21.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3264" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":37,"failed":0}
    
    SSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:21.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-2129" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":42,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:27.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-1462" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":1,"skipped":60,"failed":0}
    [BeforeEach] [k8s.io] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:05:27.747: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pods
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:29.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-6433" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":60,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:05:29.975: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc1132f9-fd5a-4661-abcf-41aa891db491" in namespace "projected-3671" to be "Succeeded or Failed"
    Sep 17 01:05:29.982: INFO: Pod "downwardapi-volume-fc1132f9-fd5a-4661-abcf-41aa891db491": Phase="Pending", Reason="", readiness=false. Elapsed: 7.34522ms
    Sep 17 01:05:31.986: INFO: Pod "downwardapi-volume-fc1132f9-fd5a-4661-abcf-41aa891db491": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01154214s
    Sep 17 01:05:33.991: INFO: Pod "downwardapi-volume-fc1132f9-fd5a-4661-abcf-41aa891db491": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016083228s
    STEP: Saw pod success
    Sep 17 01:05:33.991: INFO: Pod "downwardapi-volume-fc1132f9-fd5a-4661-abcf-41aa891db491" satisfied condition "Succeeded or Failed"
    Sep 17 01:05:33.995: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod downwardapi-volume-fc1132f9-fd5a-4661-abcf-41aa891db491 container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:05:34.030: INFO: Waiting for pod downwardapi-volume-fc1132f9-fd5a-4661-abcf-41aa891db491 to disappear
    Sep 17 01:05:34.034: INFO: Pod downwardapi-volume-fc1132f9-fd5a-4661-abcf-41aa891db491 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:34.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3671" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":92,"failed":0}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:05:34.063: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating projection with secret that has name projected-secret-test-map-cb95cbab-91fb-48e4-8694-a9756e805757
    STEP: Creating a pod to test consume secrets
    Sep 17 01:05:34.102: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-040751a0-6c0f-48c0-9bed-71b697d18665" in namespace "projected-6103" to be "Succeeded or Failed"
    Sep 17 01:05:34.108: INFO: Pod "pod-projected-secrets-040751a0-6c0f-48c0-9bed-71b697d18665": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031451ms
    Sep 17 01:05:36.112: INFO: Pod "pod-projected-secrets-040751a0-6c0f-48c0-9bed-71b697d18665": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00965009s
    STEP: Saw pod success
    Sep 17 01:05:36.112: INFO: Pod "pod-projected-secrets-040751a0-6c0f-48c0-9bed-71b697d18665" satisfied condition "Succeeded or Failed"
    Sep 17 01:05:36.115: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc pod pod-projected-secrets-040751a0-6c0f-48c0-9bed-71b697d18665 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:05:36.139: INFO: Waiting for pod pod-projected-secrets-040751a0-6c0f-48c0-9bed-71b697d18665 to disappear
    Sep 17 01:05:36.141: INFO: Pod pod-projected-secrets-040751a0-6c0f-48c0-9bed-71b697d18665 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:36.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6103" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":103,"failed":0}
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:05:36.150: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating secret with name secret-test-92d92982-ec55-48a8-b490-fa3b59795daa
    STEP: Creating a pod to test consume secrets
    Sep 17 01:05:36.189: INFO: Waiting up to 5m0s for pod "pod-secrets-073630be-332c-4130-95f2-02b8aa423bde" in namespace "secrets-5133" to be "Succeeded or Failed"
    Sep 17 01:05:36.192: INFO: Pod "pod-secrets-073630be-332c-4130-95f2-02b8aa423bde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.580688ms
    Sep 17 01:05:38.196: INFO: Pod "pod-secrets-073630be-332c-4130-95f2-02b8aa423bde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006705866s
    STEP: Saw pod success
    Sep 17 01:05:38.196: INFO: Pod "pod-secrets-073630be-332c-4130-95f2-02b8aa423bde" satisfied condition "Succeeded or Failed"
    Sep 17 01:05:38.199: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod pod-secrets-073630be-332c-4130-95f2-02b8aa423bde container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:05:38.215: INFO: Waiting for pod pod-secrets-073630be-332c-4130-95f2-02b8aa423bde to disappear
    Sep 17 01:05:38.219: INFO: Pod pod-secrets-073630be-332c-4130-95f2-02b8aa423bde no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:38.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-5133" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":103,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:05:38.290: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap with name configmap-test-volume-103e37a4-04f5-4faf-811d-08a7ab1d9494
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:05:38.329: INFO: Waiting up to 5m0s for pod "pod-configmaps-d6a6c23a-ca03-4194-a57a-1bdc7e8fe3c2" in namespace "configmap-5204" to be "Succeeded or Failed"
    Sep 17 01:05:38.332: INFO: Pod "pod-configmaps-d6a6c23a-ca03-4194-a57a-1bdc7e8fe3c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.749969ms
    Sep 17 01:05:40.336: INFO: Pod "pod-configmaps-d6a6c23a-ca03-4194-a57a-1bdc7e8fe3c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006945303s
    STEP: Saw pod success
    Sep 17 01:05:40.336: INFO: Pod "pod-configmaps-d6a6c23a-ca03-4194-a57a-1bdc7e8fe3c2" satisfied condition "Succeeded or Failed"
    Sep 17 01:05:40.339: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc pod pod-configmaps-d6a6c23a-ca03-4194-a57a-1bdc7e8fe3c2 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:05:40.355: INFO: Waiting for pod pod-configmaps-d6a6c23a-ca03-4194-a57a-1bdc7e8fe3c2 to disappear
    Sep 17 01:05:40.361: INFO: Pod pod-configmaps-d6a6c23a-ca03-4194-a57a-1bdc7e8fe3c2 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:40.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5204" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":151,"failed":0}
    [BeforeEach] [sig-storage] EmptyDir wrapper volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:05:40.371: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir-wrapper
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:42.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-wrapper-4737" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":7,"skipped":151,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating pod pod-subpath-test-configmap-rsrh
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 17 01:05:20.680: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rsrh" in namespace "subpath-4342" to be "Succeeded or Failed"
    Sep 17 01:05:20.683: INFO: Pod "pod-subpath-test-configmap-rsrh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.983855ms
    Sep 17 01:05:22.687: INFO: Pod "pod-subpath-test-configmap-rsrh": Phase="Running", Reason="", readiness=true. Elapsed: 2.006924195s
    Sep 17 01:05:24.691: INFO: Pod "pod-subpath-test-configmap-rsrh": Phase="Running", Reason="", readiness=true. Elapsed: 4.011727312s
    Sep 17 01:05:26.695: INFO: Pod "pod-subpath-test-configmap-rsrh": Phase="Running", Reason="", readiness=true. Elapsed: 6.01582962s
    Sep 17 01:05:28.699: INFO: Pod "pod-subpath-test-configmap-rsrh": Phase="Running", Reason="", readiness=true. Elapsed: 8.019711557s
    Sep 17 01:05:30.704: INFO: Pod "pod-subpath-test-configmap-rsrh": Phase="Running", Reason="", readiness=true. Elapsed: 10.023962039s
    Sep 17 01:05:32.707: INFO: Pod "pod-subpath-test-configmap-rsrh": Phase="Running", Reason="", readiness=true. Elapsed: 12.027646723s
    Sep 17 01:05:34.712: INFO: Pod "pod-subpath-test-configmap-rsrh": Phase="Running", Reason="", readiness=true. Elapsed: 14.032254829s
    Sep 17 01:05:36.716: INFO: Pod "pod-subpath-test-configmap-rsrh": Phase="Running", Reason="", readiness=true. Elapsed: 16.036107162s
    Sep 17 01:05:38.722: INFO: Pod "pod-subpath-test-configmap-rsrh": Phase="Running", Reason="", readiness=true. Elapsed: 18.04272224s
    Sep 17 01:05:40.727: INFO: Pod "pod-subpath-test-configmap-rsrh": Phase="Running", Reason="", readiness=true. Elapsed: 20.04713251s
    Sep 17 01:05:42.731: INFO: Pod "pod-subpath-test-configmap-rsrh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.051086308s
    STEP: Saw pod success
    Sep 17 01:05:42.731: INFO: Pod "pod-subpath-test-configmap-rsrh" satisfied condition "Succeeded or Failed"
    Sep 17 01:05:42.733: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-subpath-test-configmap-rsrh container test-container-subpath-configmap-rsrh: <nil>
    STEP: delete the pod
    Sep 17 01:05:42.748: INFO: Waiting for pod pod-subpath-test-configmap-rsrh to disappear
    Sep 17 01:05:42.751: INFO: Pod pod-subpath-test-configmap-rsrh no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-rsrh
    Sep 17 01:05:42.751: INFO: Deleting pod "pod-subpath-test-configmap-rsrh" in namespace "subpath-4342"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:42.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-4342" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":51,"failed":0}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] IngressClass API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:42.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingressclass-4847" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":3,"skipped":59,"failed":0}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:05:42.885: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap with name projected-configmap-test-volume-5fcbf9d8-6186-4cfa-b844-cc62102360fb
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:05:42.930: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-92de2f54-427d-4a62-b161-9b50eeb6ad50" in namespace "projected-9447" to be "Succeeded or Failed"
    Sep 17 01:05:42.933: INFO: Pod "pod-projected-configmaps-92de2f54-427d-4a62-b161-9b50eeb6ad50": Phase="Pending", Reason="", readiness=false. Elapsed: 3.210619ms
    Sep 17 01:05:44.938: INFO: Pod "pod-projected-configmaps-92de2f54-427d-4a62-b161-9b50eeb6ad50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007614803s
    STEP: Saw pod success
    Sep 17 01:05:44.938: INFO: Pod "pod-projected-configmaps-92de2f54-427d-4a62-b161-9b50eeb6ad50" satisfied condition "Succeeded or Failed"
    Sep 17 01:05:44.941: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc pod pod-projected-configmaps-92de2f54-427d-4a62-b161-9b50eeb6ad50 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:05:44.963: INFO: Waiting for pod pod-projected-configmaps-92de2f54-427d-4a62-b161-9b50eeb6ad50 to disappear
    Sep 17 01:05:44.968: INFO: Pod pod-projected-configmaps-92de2f54-427d-4a62-b161-9b50eeb6ad50 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:44.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9447" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":66,"failed":0}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:05:44.998: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep 17 01:05:45.038: INFO: Waiting up to 5m0s for pod "pod-d62eebea-1bb5-4321-8cb7-a7eed35b217e" in namespace "emptydir-4125" to be "Succeeded or Failed"
    Sep 17 01:05:45.042: INFO: Pod "pod-d62eebea-1bb5-4321-8cb7-a7eed35b217e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.11213ms
    Sep 17 01:05:47.045: INFO: Pod "pod-d62eebea-1bb5-4321-8cb7-a7eed35b217e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006412901s
    STEP: Saw pod success
    Sep 17 01:05:47.045: INFO: Pod "pod-d62eebea-1bb5-4321-8cb7-a7eed35b217e" satisfied condition "Succeeded or Failed"
    Sep 17 01:05:47.048: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod pod-d62eebea-1bb5-4321-8cb7-a7eed35b217e container test-container: <nil>
    STEP: delete the pod
    Sep 17 01:05:47.063: INFO: Waiting for pod pod-d62eebea-1bb5-4321-8cb7-a7eed35b217e to disappear
    Sep 17 01:05:47.068: INFO: Pod pod-d62eebea-1bb5-4321-8cb7-a7eed35b217e no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:47.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4125" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":77,"failed":0}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:48.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3095" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":173,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:05:47.090: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [k8s.io] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
    [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    Sep 17 01:05:47.129: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-580d858d-e0d8-4140-8954-719100c29269" in namespace "security-context-test-3136" to be "Succeeded or Failed"
    Sep 17 01:05:47.133: INFO: Pod "busybox-readonly-false-580d858d-e0d8-4140-8954-719100c29269": Phase="Pending", Reason="", readiness=false. Elapsed: 3.056728ms
    Sep 17 01:05:49.137: INFO: Pod "busybox-readonly-false-580d858d-e0d8-4140-8954-719100c29269": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007522052s
    Sep 17 01:05:49.137: INFO: Pod "busybox-readonly-false-580d858d-e0d8-4140-8954-719100c29269" satisfied condition "Succeeded or Failed"
    [AfterEach] [k8s.io] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:49.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-3136" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":86,"failed":0}
    
    SSS
    ------------------------------
    [BeforeEach] [k8s.io] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:05:48.664: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    Sep 17 01:05:50.725: INFO: Deleting pod "var-expansion-3001301b-cda5-4143-98e5-f4ea3620ef6d" in namespace "var-expansion-8253"
    Sep 17 01:05:50.729: INFO: Wait up to 5m0s for pod "var-expansion-3001301b-cda5-4143-98e5-f4ea3620ef6d" to be fully deleted
    [AfterEach] [k8s.io] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:05:52.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-8253" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":-1,"completed":9,"skipped":195,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:06:13.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-2971" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":10,"skipped":215,"failed":0}
    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:06:24.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-6490" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":11,"skipped":225,"failed":0}
    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:06:26.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-7913" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":244,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [k8s.io] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:06:30.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-1781" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":248,"failed":0}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 48 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:06:34.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-6677" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":14,"skipped":261,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:06:40.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-2881" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":15,"skipped":263,"failed":0}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:06:40.699: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap with name configmap-test-volume-map-a72fe6ff-b283-4ae0-9389-3e23cbcdcb2d
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:06:40.741: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a88be6c-af21-4582-8a8d-bed0bd7c3499" in namespace "configmap-9059" to be "Succeeded or Failed"
    Sep 17 01:06:40.744: INFO: Pod "pod-configmaps-4a88be6c-af21-4582-8a8d-bed0bd7c3499": Phase="Pending", Reason="", readiness=false. Elapsed: 2.814149ms
    Sep 17 01:06:42.748: INFO: Pod "pod-configmaps-4a88be6c-af21-4582-8a8d-bed0bd7c3499": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006578499s
    STEP: Saw pod success
    Sep 17 01:06:42.748: INFO: Pod "pod-configmaps-4a88be6c-af21-4582-8a8d-bed0bd7c3499" satisfied condition "Succeeded or Failed"
    Sep 17 01:06:42.751: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-configmaps-4a88be6c-af21-4582-8a8d-bed0bd7c3499 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:06:42.766: INFO: Waiting for pod pod-configmaps-4a88be6c-af21-4582-8a8d-bed0bd7c3499 to disappear
    Sep 17 01:06:42.768: INFO: Pod pod-configmaps-4a88be6c-af21-4582-8a8d-bed0bd7c3499 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:06:42.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9059" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":268,"failed":0}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:06:44.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1794" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":17,"skipped":279,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:06:44.968: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test override all
    Sep 17 01:06:45.001: INFO: Waiting up to 5m0s for pod "client-containers-35e795cc-adfc-499a-9a13-7f95bdb6da91" in namespace "containers-2885" to be "Succeeded or Failed"
    Sep 17 01:06:45.003: INFO: Pod "client-containers-35e795cc-adfc-499a-9a13-7f95bdb6da91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.394281ms
    Sep 17 01:06:47.008: INFO: Pod "client-containers-35e795cc-adfc-499a-9a13-7f95bdb6da91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007010204s
    STEP: Saw pod success
    Sep 17 01:06:47.008: INFO: Pod "client-containers-35e795cc-adfc-499a-9a13-7f95bdb6da91" satisfied condition "Succeeded or Failed"
    Sep 17 01:06:47.011: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc pod client-containers-35e795cc-adfc-499a-9a13-7f95bdb6da91 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:06:47.024: INFO: Waiting for pod client-containers-35e795cc-adfc-499a-9a13-7f95bdb6da91 to disappear
    Sep 17 01:06:47.028: INFO: Pod client-containers-35e795cc-adfc-499a-9a13-7f95bdb6da91 no longer exists
    [AfterEach] [k8s.io] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:06:47.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-2885" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":302,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:06:49.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-4918" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":7,"skipped":89,"failed":0}
    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:06:49.294: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create secret due to empty secret key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating projection with secret that has name secret-emptykey-test-19a8ca7a-7a19-4c84-afce-83d7cbc011a0
    [AfterEach] [sig-api-machinery] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:06:49.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-43" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":8,"skipped":108,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [k8s.io] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:06:54.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-3865" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":19,"skipped":361,"failed":0}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:06:54.321: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create ConfigMap with empty key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap that has name configmap-test-emptyKey-d80b8022-cb28-477e-84a3-d2bec0b5e3a8
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:06:54.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-288" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":20,"skipped":369,"failed":0}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:06:54.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-5403" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":21,"skipped":375,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:05:21.934: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: creating the pod with failed condition
    STEP: updating the pod
    Sep 17 01:07:22.485: INFO: Successfully updated pod "var-expansion-e60754be-77de-425d-af06-ce25bb57842c"
    STEP: waiting for pod running
    STEP: deleting the pod gracefully
    Sep 17 01:07:24.493: INFO: Deleting pod "var-expansion-e60754be-77de-425d-af06-ce25bb57842c" in namespace "var-expansion-551"
    Sep 17 01:07:24.498: INFO: Wait up to 5m0s for pod "var-expansion-e60754be-77de-425d-af06-ce25bb57842c" to be fully deleted
... skipping 6 lines ...
    • [SLOW TEST:164.583 seconds]
    [k8s.io] Variable Expansion
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
      should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    ------------------------------
    {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":-1,"completed":4,"skipped":105,"failed":0}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:08:06.531: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep 17 01:08:06.576: INFO: Waiting up to 5m0s for pod "pod-cd88fdd3-84d0-41c1-95d2-0a848570d803" in namespace "emptydir-9627" to be "Succeeded or Failed"
    Sep 17 01:08:06.580: INFO: Pod "pod-cd88fdd3-84d0-41c1-95d2-0a848570d803": Phase="Pending", Reason="", readiness=false. Elapsed: 3.44139ms
    Sep 17 01:08:08.585: INFO: Pod "pod-cd88fdd3-84d0-41c1-95d2-0a848570d803": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008551936s
    STEP: Saw pod success
    Sep 17 01:08:08.585: INFO: Pod "pod-cd88fdd3-84d0-41c1-95d2-0a848570d803" satisfied condition "Succeeded or Failed"
    Sep 17 01:08:08.590: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-cd88fdd3-84d0-41c1-95d2-0a848570d803 container test-container: <nil>
    STEP: delete the pod
    Sep 17 01:08:08.613: INFO: Waiting for pod pod-cd88fdd3-84d0-41c1-95d2-0a848570d803 to disappear
    Sep 17 01:08:08.616: INFO: Pod pod-cd88fdd3-84d0-41c1-95d2-0a848570d803 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:08:08.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-9627" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":111,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 104 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:08:31.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-1740" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":9,"skipped":112,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 64 lines ...
    STEP: Destroying namespace "services-2568" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":140,"failed":0}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-8544-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":11,"skipped":143,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:08:53.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b4bec7de-41cd-4197-b226-72222df679a2" in namespace "projected-9959" to be "Succeeded or Failed"
    Sep 17 01:08:53.839: INFO: Pod "downwardapi-volume-b4bec7de-41cd-4197-b226-72222df679a2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.674411ms
    Sep 17 01:08:55.843: INFO: Pod "downwardapi-volume-b4bec7de-41cd-4197-b226-72222df679a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007594927s
    STEP: Saw pod success
    Sep 17 01:08:55.843: INFO: Pod "downwardapi-volume-b4bec7de-41cd-4197-b226-72222df679a2" satisfied condition "Succeeded or Failed"
    Sep 17 01:08:55.846: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc pod downwardapi-volume-b4bec7de-41cd-4197-b226-72222df679a2 container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:08:55.871: INFO: Waiting for pod downwardapi-volume-b4bec7de-41cd-4197-b226-72222df679a2 to disappear
    Sep 17 01:08:55.874: INFO: Pod downwardapi-volume-b4bec7de-41cd-4197-b226-72222df679a2 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:08:55.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9959" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":147,"failed":0}
    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Destroying namespace "services-7819" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":13,"skipped":166,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:08:56.023: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap with name projected-configmap-test-volume-2ee0e52c-715d-48e1-8947-da260acb883d
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:08:56.060: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-299f2989-776e-479f-90d5-9e8c333333be" in namespace "projected-6844" to be "Succeeded or Failed"
    Sep 17 01:08:56.064: INFO: Pod "pod-projected-configmaps-299f2989-776e-479f-90d5-9e8c333333be": Phase="Pending", Reason="", readiness=false. Elapsed: 3.09528ms
    Sep 17 01:08:58.067: INFO: Pod "pod-projected-configmaps-299f2989-776e-479f-90d5-9e8c333333be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006494999s
    STEP: Saw pod success
    Sep 17 01:08:58.067: INFO: Pod "pod-projected-configmaps-299f2989-776e-479f-90d5-9e8c333333be" satisfied condition "Succeeded or Failed"
    Sep 17 01:08:58.070: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc pod pod-projected-configmaps-299f2989-776e-479f-90d5-9e8c333333be container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:08:58.091: INFO: Waiting for pod pod-projected-configmaps-299f2989-776e-479f-90d5-9e8c333333be to disappear
    Sep 17 01:08:58.094: INFO: Pod pod-projected-configmaps-299f2989-776e-479f-90d5-9e8c333333be no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:08:58.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6844" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":203,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 19 lines ...
    • [SLOW TEST:244.618 seconds]
    [k8s.io] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
      should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    ------------------------------
    {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [k8s.io] [sig-node] Pods Extended
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:09:16.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-7725" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 47 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:09:20.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-9288" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":270,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
    [It] should serve a basic endpoint from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: creating service endpoint-test2 in namespace services-581
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-581 to expose endpoints map[]
    Sep 17 01:09:16.220: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found
    Sep 17 01:09:17.230: INFO: successfully validated that service endpoint-test2 in namespace services-581 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-581
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-581 to expose endpoints map[pod1:[80]]
    Sep 17 01:09:19.255: INFO: successfully validated that service endpoint-test2 in namespace services-581 exposes endpoints map[pod1:[80]]
    STEP: Creating pod pod2 in namespace services-581
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-581 to expose endpoints map[pod1:[80] pod2:[80]]
... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:09:22.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-6717" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":300,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-689-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":17,"skipped":305,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0}

    [BeforeEach] [k8s.io] KubeletManagedEtcHosts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:09:21.475: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 40 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:09:26.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "e2e-kubelet-etc-hosts-5564" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":9,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:09:26.626: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating projection with secret that has name projected-secret-test-8a516f36-f596-40f1-848f-0502dc7f78db
    STEP: Creating a pod to test consume secrets
    Sep 17 01:09:26.695: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-720ff327-325d-47d3-93b5-e5121770bdf8" in namespace "projected-8836" to be "Succeeded or Failed"
    Sep 17 01:09:26.698: INFO: Pod "pod-projected-secrets-720ff327-325d-47d3-93b5-e5121770bdf8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.382018ms
    Sep 17 01:09:28.702: INFO: Pod "pod-projected-secrets-720ff327-325d-47d3-93b5-e5121770bdf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007433843s
    STEP: Saw pod success
    Sep 17 01:09:28.702: INFO: Pod "pod-projected-secrets-720ff327-325d-47d3-93b5-e5121770bdf8" satisfied condition "Succeeded or Failed"
    Sep 17 01:09:28.705: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-projected-secrets-720ff327-325d-47d3-93b5-e5121770bdf8 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:09:28.720: INFO: Waiting for pod pod-projected-secrets-720ff327-325d-47d3-93b5-e5121770bdf8 to disappear
    Sep 17 01:09:28.724: INFO: Pod pod-projected-secrets-720ff327-325d-47d3-93b5-e5121770bdf8 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 29 lines ...
    STEP: Destroying namespace "crd-webhook-4627" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":18,"skipped":321,"failed":0}

    [BeforeEach] [sig-api-machinery] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:09:31.327: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename events
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:09:31.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-9762" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":19,"skipped":321,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "crd-webhook-4217" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":20,"skipped":337,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":23,"failed":0}

    [BeforeEach] [sig-api-machinery] Aggregator
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:09:28.735: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename aggregator
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
    Sep 17 01:09:29.443: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created
    Sep 17 01:09:31.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63798973769, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798973769, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63798973769, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798973769, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)}
    Sep 17 01:09:33.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63798973769, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798973769, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63798973769, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798973769, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)}
    Sep 17 01:10:35.710: INFO: Waited 1m0.203661583s for the sample-apiserver to be ready to handle requests.
    Sep 17 01:10:35.710: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","uid":"32e3a39e-6a50-403d-8139-1daed573e666","resourceVersion":"5682","creationTimestamp":"2022-09-17T01:09:35Z","managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2022-09-17T01:09:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:caBundle":{},"f:group":{},"f:groupPriorityMinimum":{},"f:service":{".":{},"f:name":{},"f:namespace":{},"f:port":{}},"f:version":{},"f:versionPriority":{}}}},{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2022-09-17T01:09:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]},"spec":{"service":{"namespace":"aggregator-7958","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM5ekNDQWQrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpJd09URTNNREV3T1RJNVdoY05Nekl3T1RFME1ERXdPVEk1V2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURDRDRVWUtsVHZOVkIrQU9JYmxDelFxRVMwVjg0aXMzT0xmUWZpMzhRS1Ftc04KSW5CWXlhSmRVS3I3a0t0S2lFTm00WXh4T3E5cU1LSHRLZE55TTQvN2lxdXE4Y0VyVENKbWJIQlQ0VU8xNFdCbwpJN0Z0RXQ5aVVaTHIwSUp2OXBFUUJYVm1RZ253dDdGWTVsbmh3WU9IcW5uK2IxOWdyVWNUY25GYitjRHYxanlNCkRSVDQ2RHpLTkVCL0NjZ1pyTVptVjVVRDdRR0ZoR1dzZndHVFpGNGRxNERkQlRJQVc1K1NvVmxXaFdRYnRmQ00KN0VYUHZuaHF5a3Iyalh5dmk4d0YxZVF3cGp1T3ZsL2RqelFlUmN5MWk2S2RlUzYvNTkxcGhQUjlWWnFRY2t1RQowM3BUM2o4d0F0R2dJeWZaL1hHRGlwOUVwVE8vbXVFcUxLZ3F1RUk5QWdNQkFBR2pRakJBTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTZ2JPdWJtOVVVOFZlNnpzczkKZ2g0SHN6ZG0yakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBQTVLWHNBSmU1Vk4yMkxXZjVURXFnb3lmeldXQ
wpkYU1jTHZSakx4bEM1Sy9Ja3VqdkNRa1l4emROUjZVTzlZVGxMWlJRang0dmdZVHVsczc0MmJRS2xyMVp4eE1sCmw3L2wwZVVhTGR6WllweEdmaDJrVm92UkI2ZHFhdVhaaTdxRVdXbzIwME9Pa2RIQjl2bFZzWnl0RVBrdE9JN3gKU0xRaEswWWtQUHpWQVBwRzNGK3BmZFdFaTAvU1p4VWdOTi93VHBkLzUrazNSTHhNQjVrZXRLYUNXNFpzR1RCdwo2OGI5SC9OSlRUN25GcDFqenp1Y0Vxa1FsbUU5eDFXVVlpOWNVUzZlcTNpb1gvQ2pySk1WSGNObHErYlpZZWlUCjI5Z0lyaFdSSS92aVZ2KzRaSGU0VUVwd2V4VUVZRURISnpCR2d2bkV5R2g0WnUvanhQTWd0U2s0NXc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2022-09-17T01:09:35Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.142.108.123:7443/apis/wardle.example.com/v1alpha1: Get \"https://10.142.108.123:7443/apis/wardle.example.com/v1alpha1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"}]}}
    Sep 17 01:10:35.712: INFO: current pods: {"metadata":{"resourceVersion":"5691"},"items":[{"metadata":{"name":"sample-apiserver-deployment-67dc674868-ss6dn","generateName":"sample-apiserver-deployment-67dc674868-","namespace":"aggregator-7958","uid":"130fa0a7-07e5-4013-a74c-9932c04acc92","resourceVersion":"5361","creationTimestamp":"2022-09-17T01:09:29Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"67dc674868"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-67dc674868","uid":"c787f0d7-9a08-4278-baf3-01377791ca9b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-17T01:09:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c787f0d7-9a08-4278-baf3-01377791ca9b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"etcd\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"sample-apiserver\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/apiserver.local.config/certificates\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-17T01:09
:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.16\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"default-token-z8np9","secret":{"secretName":"default-token-z8np9","defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"default-token-z8np9","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.4.13-0","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"default-token-z8np9","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFir
st","serviceAccountName":"default","serviceAccount":"default","nodeName":"k8s-upgrade-and-conformance-8gqwip-worker-08uw3p","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-17T01:09:29Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-17T01:09:35Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-17T01:09:35Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-17T01:09:29Z"}],"hostIP":"172.18.0.7","podIP":"192.168.2.16","podIPs":[{"ip":"192.168.2.16"}],"startTime":"2022-09-17T01:09:29Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2022-09-17T01:09:35Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.4.13-0","imageID":"sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","containerID":"containerd://73fa4f5f302f7b42164f1fd8c1d362b7c578c2dae24a8332cdf817338899fd71","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2022-09-17T01:09:31Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17","imageID":"gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55","containerID":"containerd://fbebebc04fad52f35caac395c727a751f5dac850267d53c8639bde2ae7f5fe79","started":true}],"qosClass":"BestEffort"}}]}
    Sep 17 01:10:35.730: INFO: logs of sample-apiserver-deployment-67dc674868-ss6dn/sample-apiserver (error: <nil>): I0917 01:09:32.583760       1 client.go:361] parsed scheme: "endpoint"
    I0917 01:09:32.583954       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    W0917 01:09:32.584643       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0917 01:09:32.591423       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
    W0917 01:09:32.591493       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
    I0917 01:09:32.622031       1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder.
    I0917 01:09:32.622057       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
    I0917 01:09:32.623916       1 client.go:361] parsed scheme: "endpoint"
    I0917 01:09:32.623970       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    W0917 01:09:32.627569       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0917 01:09:33.585351       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0917 01:09:33.628141       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0917 01:09:35.069204       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    I0917 01:09:37.504520       1 client.go:361] parsed scheme: "endpoint"
    I0917 01:09:37.504576       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    I0917 01:09:37.506460       1 client.go:361] parsed scheme: "endpoint"
    I0917 01:09:37.506502       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    I0917 01:09:37.507852       1 client.go:361] parsed scheme: "endpoint"
    I0917 01:09:37.507895       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
... skipping 4 lines ...
    I0917 01:09:37.593158       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
    I0917 01:09:37.593267       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
    I0917 01:09:37.593317       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
    I0917 01:09:37.693418       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
    I0917 01:09:37.693574       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
    
    Sep 17 01:10:35.736: INFO: logs of sample-apiserver-deployment-67dc674868-ss6dn/etcd (error: <nil>): [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
    2022-09-17 01:09:35.138770 I | etcdmain: etcd Version: 3.4.13
    2022-09-17 01:09:35.138928 I | etcdmain: Git SHA: ae9734ed2
    2022-09-17 01:09:35.138933 I | etcdmain: Go Version: go1.12.17
    2022-09-17 01:09:35.138937 I | etcdmain: Go OS/Arch: linux/amd64
    2022-09-17 01:09:35.138941 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
    2022-09-17 01:09:35.138949 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
... skipping 26 lines ...
    2022-09-17 01:09:35.956079 N | etcdserver/membership: set the initial cluster version to 3.4
    2022-09-17 01:09:35.956147 I | etcdserver/api: enabled capabilities for version 3.4
    2022-09-17 01:09:35.956193 I | etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:2379]} to cluster cdf818194e3a8c32
    2022-09-17 01:09:35.956276 I | embed: ready to serve client requests
    2022-09-17 01:09:35.957205 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
    
    Sep 17 01:10:35.736: FAIL: gave up waiting for apiservice wardle to come up successfully
    Unexpected error:
        <*errors.errorString | 0xc0002ee1f0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 22 lines ...
    [sig-api-machinery] Aggregator
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    
      Sep 17 01:10:35.736: gave up waiting for apiservice wardle to come up successfully
      Unexpected error:
          <*errors.errorString | 0xc0002ee1f0>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:404
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":5,"skipped":23,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    [BeforeEach] [sig-api-machinery] Aggregator
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:10:36.048: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename aggregator
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:10:45.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "aggregator-7013" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":6,"skipped":23,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:10:45.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-283" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":7,"skipped":43,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:10:45.780: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep 17 01:10:45.819: INFO: Waiting up to 5m0s for pod "pod-d228d871-d627-4285-abdf-e771f840cde7" in namespace "emptydir-1783" to be "Succeeded or Failed"
    Sep 17 01:10:45.823: INFO: Pod "pod-d228d871-d627-4285-abdf-e771f840cde7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.888556ms
    Sep 17 01:10:47.826: INFO: Pod "pod-d228d871-d627-4285-abdf-e771f840cde7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00660177s
    STEP: Saw pod success
    Sep 17 01:10:47.826: INFO: Pod "pod-d228d871-d627-4285-abdf-e771f840cde7" satisfied condition "Succeeded or Failed"
    Sep 17 01:10:47.830: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-d228d871-d627-4285-abdf-e771f840cde7 container test-container: <nil>
    STEP: delete the pod
    Sep 17 01:10:47.845: INFO: Waiting for pod pod-d228d871-d627-4285-abdf-e771f840cde7 to disappear
    Sep 17 01:10:47.848: INFO: Pod pod-d228d871-d627-4285-abdf-e771f840cde7 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:10:47.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1783" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":49,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 75 lines ...
    • [SLOW TEST:235.168 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
      should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":22,"skipped":395,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:10:51.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-8400" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":425,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:10:53.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-4840" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":9,"skipped":52,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:11:30.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-9202" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":-1,"completed":24,"skipped":432,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:11:30.217: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fa77d0d-4387-4652-a0fc-9f27e4994d0d" in namespace "projected-4748" to be "Succeeded or Failed"
    Sep 17 01:11:30.220: INFO: Pod "downwardapi-volume-2fa77d0d-4387-4652-a0fc-9f27e4994d0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.716014ms
    Sep 17 01:11:32.225: INFO: Pod "downwardapi-volume-2fa77d0d-4387-4652-a0fc-9f27e4994d0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006868377s
    STEP: Saw pod success
    Sep 17 01:11:32.225: INFO: Pod "downwardapi-volume-2fa77d0d-4387-4652-a0fc-9f27e4994d0d" satisfied condition "Succeeded or Failed"
    Sep 17 01:11:32.228: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod downwardapi-volume-2fa77d0d-4387-4652-a0fc-9f27e4994d0d container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:11:32.248: INFO: Waiting for pod downwardapi-volume-2fa77d0d-4387-4652-a0fc-9f27e4994d0d to disappear
    Sep 17 01:11:32.251: INFO: Pod downwardapi-volume-2fa77d0d-4387-4652-a0fc-9f27e4994d0d no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:11:32.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4748" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":490,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:11:32.376: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod UID as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward api env vars
    Sep 17 01:11:32.415: INFO: Waiting up to 5m0s for pod "downward-api-62e182d0-9ee6-43bb-af06-2b305730a116" in namespace "downward-api-684" to be "Succeeded or Failed"
    Sep 17 01:11:32.419: INFO: Pod "downward-api-62e182d0-9ee6-43bb-af06-2b305730a116": Phase="Pending", Reason="", readiness=false. Elapsed: 3.544615ms
    Sep 17 01:11:34.423: INFO: Pod "downward-api-62e182d0-9ee6-43bb-af06-2b305730a116": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007457496s
    STEP: Saw pod success
    Sep 17 01:11:34.423: INFO: Pod "downward-api-62e182d0-9ee6-43bb-af06-2b305730a116" satisfied condition "Succeeded or Failed"
    Sep 17 01:11:34.426: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod downward-api-62e182d0-9ee6-43bb-af06-2b305730a116 container dapi-container: <nil>
    STEP: delete the pod
    Sep 17 01:11:34.446: INFO: Waiting for pod downward-api-62e182d0-9ee6-43bb-af06-2b305730a116 to disappear
    Sep 17 01:11:34.449: INFO: Pod downward-api-62e182d0-9ee6-43bb-af06-2b305730a116 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:11:34.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-684" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":557,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:12:35.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-watch-7428" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":27,"skipped":568,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:12:37.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-1119" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":589,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:12:50.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-1670" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":29,"skipped":617,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 5 lines ...
    STEP: create the deployment
    STEP: Wait for the Deployment to create new ReplicaSet
    STEP: delete the deployment
    STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
    STEP: Gathering metrics
    W0917 01:08:09.788652      16 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
    Sep 17 01:13:09.796: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
    [AfterEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:13:09.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-9894" for this suite.
    
    
    • [SLOW TEST:301.116 seconds]
    [sig-api-machinery] Garbage collector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":6,"skipped":143,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:13:12.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-2114" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":7,"skipped":223,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 29 lines ...
    STEP: Destroying namespace "services-5923" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":8,"skipped":227,"failed":0}

    [BeforeEach] [k8s.io] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:13:20.286: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-runtime
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:13:22.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-7581" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":227,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 67 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:13:32.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3082" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":10,"skipped":240,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 6 lines ...
    STEP: create the rc2
    STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
    STEP: delete the rc simpletest-rc-to-be-deleted
    STEP: wait for the rc to be deleted
    STEP: Gathering metrics
    W0917 01:09:46.768079      15 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
    Sep 17 01:14:46.775: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
    Sep 17 01:14:46.775: INFO: Deleting pod "simpletest-rc-to-be-deleted-57lsp" in namespace "gc-6321"
    Sep 17 01:14:46.788: INFO: Deleting pod "simpletest-rc-to-be-deleted-85c4f" in namespace "gc-6321"
    Sep 17 01:14:46.808: INFO: Deleting pod "simpletest-rc-to-be-deleted-d8gwf" in namespace "gc-6321"
    Sep 17 01:14:46.836: INFO: Deleting pod "simpletest-rc-to-be-deleted-j7lkk" in namespace "gc-6321"
    Sep 17 01:14:46.859: INFO: Deleting pod "simpletest-rc-to-be-deleted-jsn5h" in namespace "gc-6321"
    [AfterEach] [sig-api-machinery] Garbage collector
... skipping 5 lines ...
    • [SLOW TEST:310.553 seconds]
    [sig-api-machinery] Garbage collector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":21,"skipped":394,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-7884-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":22,"skipped":396,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:14:53.026: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep 17 01:14:53.104: INFO: Waiting up to 5m0s for pod "pod-6fb34e99-cc72-4bd2-bb79-24d18ac4ec5a" in namespace "emptydir-3719" to be "Succeeded or Failed"
    Sep 17 01:14:53.108: INFO: Pod "pod-6fb34e99-cc72-4bd2-bb79-24d18ac4ec5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043054ms
    Sep 17 01:14:55.113: INFO: Pod "pod-6fb34e99-cc72-4bd2-bb79-24d18ac4ec5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008703102s
    STEP: Saw pod success
    Sep 17 01:14:55.113: INFO: Pod "pod-6fb34e99-cc72-4bd2-bb79-24d18ac4ec5a" satisfied condition "Succeeded or Failed"
    Sep 17 01:14:55.120: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-6fb34e99-cc72-4bd2-bb79-24d18ac4ec5a container test-container: <nil>
    STEP: delete the pod
    Sep 17 01:14:55.161: INFO: Waiting for pod pod-6fb34e99-cc72-4bd2-bb79-24d18ac4ec5a to disappear
    Sep 17 01:14:55.167: INFO: Pod pod-6fb34e99-cc72-4bd2-bb79-24d18ac4ec5a no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:14:57.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2923" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":324,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:14:57.313: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a67f2dc-60a0-4aa9-a9f5-83521e9f7d8a" in namespace "projected-6037" to be "Succeeded or Failed"
    Sep 17 01:14:57.318: INFO: Pod "downwardapi-volume-7a67f2dc-60a0-4aa9-a9f5-83521e9f7d8a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.621821ms
    Sep 17 01:14:59.325: INFO: Pod "downwardapi-volume-7a67f2dc-60a0-4aa9-a9f5-83521e9f7d8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012710493s
    STEP: Saw pod success
    Sep 17 01:14:59.326: INFO: Pod "downwardapi-volume-7a67f2dc-60a0-4aa9-a9f5-83521e9f7d8a" satisfied condition "Succeeded or Failed"
    Sep 17 01:14:59.332: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc pod downwardapi-volume-7a67f2dc-60a0-4aa9-a9f5-83521e9f7d8a container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:14:59.380: INFO: Waiting for pod downwardapi-volume-7a67f2dc-60a0-4aa9-a9f5-83521e9f7d8a to disappear
    Sep 17 01:14:59.387: INFO: Pod downwardapi-volume-7a67f2dc-60a0-4aa9-a9f5-83521e9f7d8a no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:14:59.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6037" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":340,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":420,"failed":0}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:14:55.194: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-4594-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":24,"skipped":420,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:14:59.593: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47ffe298-1e95-48c1-81d2-636dbc02f46a" in namespace "downward-api-9505" to be "Succeeded or Failed"
    Sep 17 01:14:59.604: INFO: Pod "downwardapi-volume-47ffe298-1e95-48c1-81d2-636dbc02f46a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.344594ms
    Sep 17 01:15:01.610: INFO: Pod "downwardapi-volume-47ffe298-1e95-48c1-81d2-636dbc02f46a": Phase="Running", Reason="", readiness=true. Elapsed: 2.016603812s
    Sep 17 01:15:03.616: INFO: Pod "downwardapi-volume-47ffe298-1e95-48c1-81d2-636dbc02f46a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023144111s
    STEP: Saw pod success
    Sep 17 01:15:03.617: INFO: Pod "downwardapi-volume-47ffe298-1e95-48c1-81d2-636dbc02f46a" satisfied condition "Succeeded or Failed"
    Sep 17 01:15:03.621: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc pod downwardapi-volume-47ffe298-1e95-48c1-81d2-636dbc02f46a container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:15:03.651: INFO: Waiting for pod downwardapi-volume-47ffe298-1e95-48c1-81d2-636dbc02f46a to disappear
    Sep 17 01:15:03.660: INFO: Pod downwardapi-volume-47ffe298-1e95-48c1-81d2-636dbc02f46a no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:15:03.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-9505" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":381,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
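    Each passed spec above is followed by a one-line JSON summary such as `{"msg":"PASSED ...","total":-1,"completed":13,"skipped":381,"failed":0}`. These lines are easy to scrape when post-processing a build log; a small stdlib-only sketch is below, with the struct fields taken directly from the keys visible in this log (the `parseSummary` helper name is an assumption).

    ```go
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // testSummary mirrors the per-spec JSON lines emitted in this log.
    type testSummary struct {
    	Msg       string `json:"msg"`
    	Total     int    `json:"total"`
    	Completed int    `json:"completed"`
    	Skipped   int    `json:"skipped"`
    	Failed    int    `json:"failed"`
    }

    // parseSummary decodes one summary line into a testSummary.
    func parseSummary(line string) (testSummary, error) {
    	var s testSummary
    	err := json.Unmarshal([]byte(line), &s)
    	return s, err
    }

    func main() {
    	line := `{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":381,"failed":0}`
    	s, err := parseSummary(line)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(s.Completed, s.Skipped, s.Failed) // 13 381 0
    }
    ```

    Summing `failed` across all such lines is one way to cross-check the "0 failed / 7 succeeded" result reported for the job.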
    ------------------------------
    [BeforeEach] [k8s.io] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:15:03.864: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    Sep 17 01:15:05.946: INFO: Deleting pod "var-expansion-3303d2cf-c730-4cd6-b2b0-b659a5727cf3" in namespace "var-expansion-5281"
    Sep 17 01:15:05.955: INFO: Wait up to 5m0s for pod "var-expansion-3303d2cf-c730-4cd6-b2b0-b659a5727cf3" to be fully deleted
    [AfterEach] [k8s.io] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:15:09.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-5281" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":-1,"completed":14,"skipped":438,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:15:09.991: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [k8s.io] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
    [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    Sep 17 01:15:10.048: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-46d31624-9b01-4938-b322-0a8a59168e9f" in namespace "security-context-test-896" to be "Succeeded or Failed"
    Sep 17 01:15:10.053: INFO: Pod "busybox-privileged-false-46d31624-9b01-4938-b322-0a8a59168e9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.340341ms
    Sep 17 01:15:12.057: INFO: Pod "busybox-privileged-false-46d31624-9b01-4938-b322-0a8a59168e9f": Phase="Running", Reason="", readiness=true. Elapsed: 2.007698551s
    Sep 17 01:15:14.061: INFO: Pod "busybox-privileged-false-46d31624-9b01-4938-b322-0a8a59168e9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012450025s
    Sep 17 01:15:14.062: INFO: Pod "busybox-privileged-false-46d31624-9b01-4938-b322-0a8a59168e9f" satisfied condition "Succeeded or Failed"
    Sep 17 01:15:14.067: INFO: Got logs for pod "busybox-privileged-false-46d31624-9b01-4938-b322-0a8a59168e9f": "ip: RTNETLINK answers: Operation not permitted\n"
    [AfterEach] [k8s.io] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:15:14.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-896" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":446,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:15:14.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-4239" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":16,"skipped":447,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:15:14.271: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff251794-3324-4958-8ace-0c43fe203ed9" in namespace "projected-3851" to be "Succeeded or Failed"
    Sep 17 01:15:14.275: INFO: Pod "downwardapi-volume-ff251794-3324-4958-8ace-0c43fe203ed9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.664174ms
    Sep 17 01:15:16.279: INFO: Pod "downwardapi-volume-ff251794-3324-4958-8ace-0c43fe203ed9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007710883s
    STEP: Saw pod success
    Sep 17 01:15:16.279: INFO: Pod "downwardapi-volume-ff251794-3324-4958-8ace-0c43fe203ed9" satisfied condition "Succeeded or Failed"
    Sep 17 01:15:16.282: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod downwardapi-volume-ff251794-3324-4958-8ace-0c43fe203ed9 container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:15:16.300: INFO: Waiting for pod downwardapi-volume-ff251794-3324-4958-8ace-0c43fe203ed9 to disappear
    Sep 17 01:15:16.302: INFO: Pod downwardapi-volume-ff251794-3324-4958-8ace-0c43fe203ed9 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:15:16.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3851" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":451,"failed":0}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:15:23.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-4777" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":18,"skipped":468,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:15:24.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-5125" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":19,"skipped":489,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:15:25.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-4612" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":20,"skipped":574,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:15:25.245: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap configmap-5215/configmap-test-a40d7c92-f7e5-4133-8dbb-424f6e377bd0
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:15:25.321: INFO: Waiting up to 5m0s for pod "pod-configmaps-fec95f45-361b-4cd0-a03f-ed7c037d3fc9" in namespace "configmap-5215" to be "Succeeded or Failed"
    Sep 17 01:15:25.326: INFO: Pod "pod-configmaps-fec95f45-361b-4cd0-a03f-ed7c037d3fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288333ms
    Sep 17 01:15:27.331: INFO: Pod "pod-configmaps-fec95f45-361b-4cd0-a03f-ed7c037d3fc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009054783s
    STEP: Saw pod success
    Sep 17 01:15:27.331: INFO: Pod "pod-configmaps-fec95f45-361b-4cd0-a03f-ed7c037d3fc9" satisfied condition "Succeeded or Failed"
    Sep 17 01:15:27.335: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-configmaps-fec95f45-361b-4cd0-a03f-ed7c037d3fc9 container env-test: <nil>
    STEP: delete the pod
    Sep 17 01:15:27.354: INFO: Waiting for pod pod-configmaps-fec95f45-361b-4cd0-a03f-ed7c037d3fc9 to disappear
    Sep 17 01:15:27.357: INFO: Pod pod-configmaps-fec95f45-361b-4cd0-a03f-ed7c037d3fc9 no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:15:27.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5215" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":613,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:15:31.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-65" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":619,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:15:31.535: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: creating secret secrets-4226/secret-test-fb6d36aa-0416-4b8c-8e38-804577c948e5
    STEP: Creating a pod to test consume secrets
    Sep 17 01:15:31.578: INFO: Waiting up to 5m0s for pod "pod-configmaps-c097978d-0e7b-45a9-abe5-ca4b5e2284a4" in namespace "secrets-4226" to be "Succeeded or Failed"
    Sep 17 01:15:31.580: INFO: Pod "pod-configmaps-c097978d-0e7b-45a9-abe5-ca4b5e2284a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121444ms
    Sep 17 01:15:33.584: INFO: Pod "pod-configmaps-c097978d-0e7b-45a9-abe5-ca4b5e2284a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005768052s
    STEP: Saw pod success
    Sep 17 01:15:33.584: INFO: Pod "pod-configmaps-c097978d-0e7b-45a9-abe5-ca4b5e2284a4" satisfied condition "Succeeded or Failed"
    Sep 17 01:15:33.587: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod pod-configmaps-c097978d-0e7b-45a9-abe5-ca4b5e2284a4 container env-test: <nil>
    STEP: delete the pod
    Sep 17 01:15:33.603: INFO: Waiting for pod pod-configmaps-c097978d-0e7b-45a9-abe5-ca4b5e2284a4 to disappear
    Sep 17 01:15:33.605: INFO: Pod pod-configmaps-c097978d-0e7b-45a9-abe5-ca4b5e2284a4 no longer exists
    [AfterEach] [sig-api-machinery] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:15:33.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-4226" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":629,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    Sep 17 01:15:13.231: INFO: Unable to read jessie_udp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:13.239: INFO: Unable to read jessie_tcp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:13.243: INFO: Unable to read jessie_udp@dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:13.247: INFO: Unable to read jessie_tcp@dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:13.251: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:13.260: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:13.283: INFO: Lookups using dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3178 wheezy_tcp@dns-test-service.dns-3178 wheezy_udp@dns-test-service.dns-3178.svc wheezy_tcp@dns-test-service.dns-3178.svc wheezy_udp@_http._tcp.dns-test-service.dns-3178.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3178.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3178 jessie_tcp@dns-test-service.dns-3178 jessie_udp@dns-test-service.dns-3178.svc jessie_tcp@dns-test-service.dns-3178.svc jessie_udp@_http._tcp.dns-test-service.dns-3178.svc jessie_tcp@_http._tcp.dns-test-service.dns-3178.svc]

    
    Sep 17 01:15:18.288: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:18.291: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:18.295: INFO: Unable to read wheezy_udp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:18.298: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:18.302: INFO: Unable to read wheezy_udp@dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
... skipping 5 lines ...
    Sep 17 01:15:18.339: INFO: Unable to read jessie_udp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:18.342: INFO: Unable to read jessie_tcp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:18.346: INFO: Unable to read jessie_udp@dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:18.353: INFO: Unable to read jessie_tcp@dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:18.356: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:18.360: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:18.380: INFO: Lookups using dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3178 wheezy_tcp@dns-test-service.dns-3178 wheezy_udp@dns-test-service.dns-3178.svc wheezy_tcp@dns-test-service.dns-3178.svc wheezy_udp@_http._tcp.dns-test-service.dns-3178.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3178.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3178 jessie_tcp@dns-test-service.dns-3178 jessie_udp@dns-test-service.dns-3178.svc jessie_tcp@dns-test-service.dns-3178.svc jessie_udp@_http._tcp.dns-test-service.dns-3178.svc jessie_tcp@_http._tcp.dns-test-service.dns-3178.svc]

    
    Sep 17 01:15:23.288: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:23.292: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:23.296: INFO: Unable to read wheezy_udp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:23.299: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:23.302: INFO: Unable to read wheezy_udp@dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
... skipping 5 lines ...
    Sep 17 01:15:23.347: INFO: Unable to read jessie_udp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:23.350: INFO: Unable to read jessie_tcp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:23.354: INFO: Unable to read jessie_udp@dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:23.357: INFO: Unable to read jessie_tcp@dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:23.361: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:23.364: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:23.385: INFO: Lookups using dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3178 wheezy_tcp@dns-test-service.dns-3178 wheezy_udp@dns-test-service.dns-3178.svc wheezy_tcp@dns-test-service.dns-3178.svc wheezy_udp@_http._tcp.dns-test-service.dns-3178.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3178.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3178 jessie_tcp@dns-test-service.dns-3178 jessie_udp@dns-test-service.dns-3178.svc jessie_tcp@dns-test-service.dns-3178.svc jessie_udp@_http._tcp.dns-test-service.dns-3178.svc jessie_tcp@_http._tcp.dns-test-service.dns-3178.svc]

    
    Sep 17 01:15:28.289: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:28.292: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:28.296: INFO: Unable to read wheezy_udp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:28.300: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:28.310: INFO: Unable to read wheezy_udp@dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
... skipping 5 lines ...
    Sep 17 01:15:28.357: INFO: Unable to read jessie_udp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:28.360: INFO: Unable to read jessie_tcp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:28.364: INFO: Unable to read jessie_udp@dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:28.367: INFO: Unable to read jessie_tcp@dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:28.370: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:28.373: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:28.391: INFO: Lookups using dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3178 wheezy_tcp@dns-test-service.dns-3178 wheezy_udp@dns-test-service.dns-3178.svc wheezy_tcp@dns-test-service.dns-3178.svc wheezy_udp@_http._tcp.dns-test-service.dns-3178.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3178.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3178 jessie_tcp@dns-test-service.dns-3178 jessie_udp@dns-test-service.dns-3178.svc jessie_tcp@dns-test-service.dns-3178.svc jessie_udp@_http._tcp.dns-test-service.dns-3178.svc jessie_tcp@_http._tcp.dns-test-service.dns-3178.svc]

    
    Sep 17 01:15:33.287: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:33.291: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:33.295: INFO: Unable to read wheezy_udp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:33.298: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:33.302: INFO: Unable to read wheezy_udp@dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
... skipping 5 lines ...
    Sep 17 01:15:33.342: INFO: Unable to read jessie_udp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:33.345: INFO: Unable to read jessie_tcp@dns-test-service.dns-3178 from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:33.348: INFO: Unable to read jessie_udp@dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:33.354: INFO: Unable to read jessie_tcp@dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:33.357: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:33.361: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3178.svc from pod dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568: the server could not find the requested resource (get pods dns-test-0cc2c2f5-e773-4087-89d4-757908640568)
    Sep 17 01:15:33.380: INFO: Lookups using dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3178 wheezy_tcp@dns-test-service.dns-3178 wheezy_udp@dns-test-service.dns-3178.svc wheezy_tcp@dns-test-service.dns-3178.svc wheezy_udp@_http._tcp.dns-test-service.dns-3178.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3178.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3178 jessie_tcp@dns-test-service.dns-3178 jessie_udp@dns-test-service.dns-3178.svc jessie_tcp@dns-test-service.dns-3178.svc jessie_udp@_http._tcp.dns-test-service.dns-3178.svc jessie_tcp@_http._tcp.dns-test-service.dns-3178.svc]
    
    Sep 17 01:15:38.372: INFO: DNS probes using dns-3178/dns-test-0cc2c2f5-e773-4087-89d4-757908640568 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:15:38.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-3178" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":25,"skipped":472,"failed":0}
    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 11 lines ...
    STEP: Destroying namespace "services-8740" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":26,"skipped":489,"failed":0}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:15:45.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-2661" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":27,"skipped":495,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with projected pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating pod pod-subpath-test-projected-xkx6
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 17 01:15:33.675: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-xkx6" in namespace "subpath-2783" to be "Succeeded or Failed"
    Sep 17 01:15:33.682: INFO: Pod "pod-subpath-test-projected-xkx6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.171313ms
    Sep 17 01:15:35.686: INFO: Pod "pod-subpath-test-projected-xkx6": Phase="Running", Reason="", readiness=true. Elapsed: 2.011505566s
    Sep 17 01:15:37.690: INFO: Pod "pod-subpath-test-projected-xkx6": Phase="Running", Reason="", readiness=true. Elapsed: 4.015368689s
    Sep 17 01:15:39.693: INFO: Pod "pod-subpath-test-projected-xkx6": Phase="Running", Reason="", readiness=true. Elapsed: 6.01825556s
    Sep 17 01:15:41.697: INFO: Pod "pod-subpath-test-projected-xkx6": Phase="Running", Reason="", readiness=true. Elapsed: 8.022080067s
    Sep 17 01:15:43.700: INFO: Pod "pod-subpath-test-projected-xkx6": Phase="Running", Reason="", readiness=true. Elapsed: 10.025378901s
    Sep 17 01:15:45.704: INFO: Pod "pod-subpath-test-projected-xkx6": Phase="Running", Reason="", readiness=true. Elapsed: 12.029260702s
    Sep 17 01:15:47.708: INFO: Pod "pod-subpath-test-projected-xkx6": Phase="Running", Reason="", readiness=true. Elapsed: 14.033171486s
    Sep 17 01:15:49.712: INFO: Pod "pod-subpath-test-projected-xkx6": Phase="Running", Reason="", readiness=true. Elapsed: 16.037208955s
    Sep 17 01:15:51.717: INFO: Pod "pod-subpath-test-projected-xkx6": Phase="Running", Reason="", readiness=true. Elapsed: 18.041884254s
    Sep 17 01:15:53.721: INFO: Pod "pod-subpath-test-projected-xkx6": Phase="Running", Reason="", readiness=true. Elapsed: 20.046046819s
    Sep 17 01:15:55.725: INFO: Pod "pod-subpath-test-projected-xkx6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.05020454s
    STEP: Saw pod success
    Sep 17 01:15:55.725: INFO: Pod "pod-subpath-test-projected-xkx6" satisfied condition "Succeeded or Failed"
    Sep 17 01:15:55.728: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod pod-subpath-test-projected-xkx6 container test-container-subpath-projected-xkx6: <nil>
    STEP: delete the pod
    Sep 17 01:15:55.748: INFO: Waiting for pod pod-subpath-test-projected-xkx6 to disappear
    Sep 17 01:15:55.750: INFO: Pod pod-subpath-test-projected-xkx6 no longer exists
    STEP: Deleting pod pod-subpath-test-projected-xkx6
    Sep 17 01:15:55.750: INFO: Deleting pod "pod-subpath-test-projected-xkx6" in namespace "subpath-2783"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:15:55.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-2783" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":24,"skipped":641,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:15:55.764: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test emptydir volume type on tmpfs
    Sep 17 01:15:55.800: INFO: Waiting up to 5m0s for pod "pod-cc805840-f06d-41c8-81a3-ffb8829a7bf6" in namespace "emptydir-8328" to be "Succeeded or Failed"
    Sep 17 01:15:55.803: INFO: Pod "pod-cc805840-f06d-41c8-81a3-ffb8829a7bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.636754ms
    Sep 17 01:15:57.807: INFO: Pod "pod-cc805840-f06d-41c8-81a3-ffb8829a7bf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006502908s
    STEP: Saw pod success
    Sep 17 01:15:57.807: INFO: Pod "pod-cc805840-f06d-41c8-81a3-ffb8829a7bf6" satisfied condition "Succeeded or Failed"
    Sep 17 01:15:57.810: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod pod-cc805840-f06d-41c8-81a3-ffb8829a7bf6 container test-container: <nil>
    STEP: delete the pod
    Sep 17 01:15:57.824: INFO: Waiting for pod pod-cc805840-f06d-41c8-81a3-ffb8829a7bf6 to disappear
    Sep 17 01:15:57.827: INFO: Pod pod-cc805840-f06d-41c8-81a3-ffb8829a7bf6 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:15:57.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8328" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":642,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: create the rc
    STEP: delete the rc
    STEP: wait for the rc to be deleted
    STEP: Gathering metrics
    W0917 01:11:00.008237      20 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
    Sep 17 01:16:00.012: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
    [AfterEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:16:00.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-9543" for this suite.
    
    
    • [SLOW TEST:306.079 seconds]
    [sig-api-machinery] Garbage collector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":10,"skipped":59,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:16:00.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3140" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":26,"skipped":664,"failed":0}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:16:00.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-1228" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":27,"skipped":676,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:16:02.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-6755" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":11,"skipped":75,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Servers with support for Table transformation
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:16:02.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "tables-43" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":12,"skipped":104,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:16:05.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-1431" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":13,"skipped":111,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:16:05.555: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward api env vars
    Sep 17 01:16:05.608: INFO: Waiting up to 5m0s for pod "downward-api-ddc8e7fe-8022-46c9-83c9-cc58d681f74d" in namespace "downward-api-248" to be "Succeeded or Failed"
    Sep 17 01:16:05.617: INFO: Pod "downward-api-ddc8e7fe-8022-46c9-83c9-cc58d681f74d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.401116ms
    Sep 17 01:16:07.621: INFO: Pod "downward-api-ddc8e7fe-8022-46c9-83c9-cc58d681f74d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012834327s
    STEP: Saw pod success
    Sep 17 01:16:07.621: INFO: Pod "downward-api-ddc8e7fe-8022-46c9-83c9-cc58d681f74d" satisfied condition "Succeeded or Failed"
    Sep 17 01:16:07.624: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc pod downward-api-ddc8e7fe-8022-46c9-83c9-cc58d681f74d container dapi-container: <nil>
    STEP: delete the pod
    Sep 17 01:16:07.646: INFO: Waiting for pod downward-api-ddc8e7fe-8022-46c9-83c9-cc58d681f74d to disappear
    Sep 17 01:16:07.649: INFO: Pod downward-api-ddc8e7fe-8022-46c9-83c9-cc58d681f74d no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:16:07.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-248" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":120,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:16:19.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-27" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":15,"skipped":138,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:16:19.719: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap with name configmap-test-volume-map-8163a49c-8cb6-433a-b4b1-d95bcca4a27d
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:16:19.760: INFO: Waiting up to 5m0s for pod "pod-configmaps-685022a0-6983-4c8a-98f0-1f789f8b97fd" in namespace "configmap-1564" to be "Succeeded or Failed"
    Sep 17 01:16:19.763: INFO: Pod "pod-configmaps-685022a0-6983-4c8a-98f0-1f789f8b97fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.402061ms
    Sep 17 01:16:21.767: INFO: Pod "pod-configmaps-685022a0-6983-4c8a-98f0-1f789f8b97fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006263684s
    STEP: Saw pod success
    Sep 17 01:16:21.767: INFO: Pod "pod-configmaps-685022a0-6983-4c8a-98f0-1f789f8b97fd" satisfied condition "Succeeded or Failed"
    Sep 17 01:16:21.770: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod pod-configmaps-685022a0-6983-4c8a-98f0-1f789f8b97fd container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:16:21.787: INFO: Waiting for pod pod-configmaps-685022a0-6983-4c8a-98f0-1f789f8b97fd to disappear
    Sep 17 01:16:21.790: INFO: Pod pod-configmaps-685022a0-6983-4c8a-98f0-1f789f8b97fd no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:16:21.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-1564" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":144,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:16:21.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7584" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":17,"skipped":170,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    Sep 17 01:16:02.509: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:02.512: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:02.524: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:02.527: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:02.531: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:02.535: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:02.549: INFO: Lookups using dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9435.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9435.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local jessie_udp@dns-test-service-2.dns-9435.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9435.svc.cluster.local]
    
    Sep 17 01:16:07.554: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:07.559: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:07.563: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:07.567: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:07.580: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:07.585: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:07.590: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:07.593: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:07.602: INFO: Lookups using dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9435.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9435.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local jessie_udp@dns-test-service-2.dns-9435.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9435.svc.cluster.local]
    
    Sep 17 01:16:12.555: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:12.559: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:12.563: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:12.567: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:12.579: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:12.584: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:12.588: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:12.596: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:12.604: INFO: Lookups using dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9435.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9435.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local jessie_udp@dns-test-service-2.dns-9435.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9435.svc.cluster.local]
    
    Sep 17 01:16:17.554: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:17.558: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:17.561: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:17.565: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:17.576: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:17.580: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:17.583: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:17.587: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:17.594: INFO: Lookups using dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9435.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9435.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local jessie_udp@dns-test-service-2.dns-9435.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9435.svc.cluster.local]

    
    Sep 17 01:16:22.553: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:22.557: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:22.561: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:22.565: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:22.575: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:22.577: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:22.581: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:22.583: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:22.590: INFO: Lookups using dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9435.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9435.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local jessie_udp@dns-test-service-2.dns-9435.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9435.svc.cluster.local]

    
    Sep 17 01:16:27.554: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:27.557: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:27.561: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:27.564: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:27.573: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:27.576: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:27.578: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:27.581: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:27.586: INFO: Lookups using dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9435.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9435.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9435.svc.cluster.local jessie_udp@dns-test-service-2.dns-9435.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9435.svc.cluster.local]

    
    Sep 17 01:16:32.567: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9435.svc.cluster.local from pod dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820: the server could not find the requested resource (get pods dns-test-3944228a-571c-415b-9993-fd736a2cd820)
    Sep 17 01:16:32.598: INFO: Lookups using dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820 failed for: [wheezy_tcp@dns-test-service-2.dns-9435.svc.cluster.local]

    
    Sep 17 01:16:37.599: INFO: DNS probes using dns-9435/dns-test-3944228a-571c-415b-9993-fd736a2cd820 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:16:37.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-9435" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":28,"skipped":749,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] server version
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:16:37.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "server-version-2241" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":29,"skipped":759,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.807 seconds]
    [k8s.io] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
      should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    ------------------------------
    {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":626,"failed":0}

    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:16:53.008: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating secret with name secret-test-map-dd210120-eeaa-412f-a29c-0b4879773db4
    STEP: Creating a pod to test consume secrets
    Sep 17 01:16:53.067: INFO: Waiting up to 5m0s for pod "pod-secrets-40a74a3a-eea1-4ffe-8f69-0b6eafde9b46" in namespace "secrets-2712" to be "Succeeded or Failed"

    Sep 17 01:16:53.077: INFO: Pod "pod-secrets-40a74a3a-eea1-4ffe-8f69-0b6eafde9b46": Phase="Pending", Reason="", readiness=false. Elapsed: 9.911546ms
    Sep 17 01:16:55.081: INFO: Pod "pod-secrets-40a74a3a-eea1-4ffe-8f69-0b6eafde9b46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014002365s
    STEP: Saw pod success
    Sep 17 01:16:55.081: INFO: Pod "pod-secrets-40a74a3a-eea1-4ffe-8f69-0b6eafde9b46" satisfied condition "Succeeded or Failed"

    Sep 17 01:16:55.084: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-secrets-40a74a3a-eea1-4ffe-8f69-0b6eafde9b46 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:16:55.102: INFO: Waiting for pod pod-secrets-40a74a3a-eea1-4ffe-8f69-0b6eafde9b46 to disappear
    Sep 17 01:16:55.104: INFO: Pod pod-secrets-40a74a3a-eea1-4ffe-8f69-0b6eafde9b46 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:16:55.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-2712" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":626,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:16:55.159: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3493781b-4bec-49d6-8280-b4d87b19b13b" in namespace "projected-4196" to be "Succeeded or Failed"

    Sep 17 01:16:55.163: INFO: Pod "downwardapi-volume-3493781b-4bec-49d6-8280-b4d87b19b13b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.301261ms
    Sep 17 01:16:57.167: INFO: Pod "downwardapi-volume-3493781b-4bec-49d6-8280-b4d87b19b13b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007435819s
    STEP: Saw pod success
    Sep 17 01:16:57.167: INFO: Pod "downwardapi-volume-3493781b-4bec-49d6-8280-b4d87b19b13b" satisfied condition "Succeeded or Failed"

    Sep 17 01:16:57.170: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod downwardapi-volume-3493781b-4bec-49d6-8280-b4d87b19b13b container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:16:57.187: INFO: Waiting for pod downwardapi-volume-3493781b-4bec-49d6-8280-b4d87b19b13b to disappear
    Sep 17 01:16:57.190: INFO: Pod downwardapi-volume-3493781b-4bec-49d6-8280-b4d87b19b13b no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:16:57.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4196" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":629,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:16:57.259: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test override arguments
    Sep 17 01:16:57.299: INFO: Waiting up to 5m0s for pod "client-containers-22ee6d44-79fc-4d3f-9c3a-e2de1f48d1ee" in namespace "containers-8719" to be "Succeeded or Failed"

    Sep 17 01:16:57.304: INFO: Pod "client-containers-22ee6d44-79fc-4d3f-9c3a-e2de1f48d1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.700774ms
    Sep 17 01:16:59.308: INFO: Pod "client-containers-22ee6d44-79fc-4d3f-9c3a-e2de1f48d1ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007826754s
    STEP: Saw pod success
    Sep 17 01:16:59.308: INFO: Pod "client-containers-22ee6d44-79fc-4d3f-9c3a-e2de1f48d1ee" satisfied condition "Succeeded or Failed"

    Sep 17 01:16:59.311: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod client-containers-22ee6d44-79fc-4d3f-9c3a-e2de1f48d1ee container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:16:59.330: INFO: Waiting for pod client-containers-22ee6d44-79fc-4d3f-9c3a-e2de1f48d1ee to disappear
    Sep 17 01:16:59.334: INFO: Pod client-containers-22ee6d44-79fc-4d3f-9c3a-e2de1f48d1ee no longer exists
    [AfterEach] [k8s.io] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:16:59.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-8719" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":667,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 41 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:17:02.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-996" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":780,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 32 lines ...
    
    Sep 17 01:17:05.539: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment":
    &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88  deployment-6437  a82cc6e9-c9d2-4ae0-92cc-965008c5c068 9155 3 2022-09-17 01:17:03 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment faea3dac-9b79-47f2-9032-6354147dc38f 0xc001c0d697 0xc001c0d698}] []  [{kube-controller-manager Update apps/v1 2022-09-17 01:17:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"faea3dac-9b79-47f2-9032-6354147dc38f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001c0d718 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
    Sep 17 01:17:05.539: INFO: All old ReplicaSets of Deployment "webserver-deployment":
    Sep 17 01:17:05.539: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7  deployment-6437  6d1fe72d-05b9-4b75-b2d7-d1deef887837 9153 3 2022-09-17 01:16:59 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment faea3dac-9b79-47f2-9032-6354147dc38f 0xc001c0d777 0xc001c0d778}] []  [{kube-controller-manager Update apps/v1 2022-09-17 01:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"faea3dac-9b79-47f2-9032-6354147dc38f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001c0d7e8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
    Sep 17 01:17:05.574: INFO: Pod "webserver-deployment-795d758f88-8q288" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-8q288 webserver-deployment-795d758f88- deployment-6437  d18f0d29-bf1f-4cc5-8cc8-ac10426921e1 9133 0 2022-09-17 01:17:03 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a82cc6e9-c9d2-4ae0-92cc-965008c5c068 0xc001c0dc30 0xc001c0dc31}] []  [{kube-controller-manager Update v1 2022-09-17 01:17:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a82cc6e9-c9d2-4ae0-92cc-965008c5c068\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-17 01:17:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.64\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnvdw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnvdw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnvdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.6.64,StartTime:2022-09-17 01:17:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 17 01:17:05.574: INFO: Pod "webserver-deployment-795d758f88-d87cb" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-d87cb webserver-deployment-795d758f88- deployment-6437  ec7d81c3-5333-4589-ab7e-3053645207d0 9190 0 2022-09-17 01:17:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a82cc6e9-c9d2-4ae0-92cc-965008c5c068 0xc001c0ddf0 0xc001c0ddf1}] []  [{kube-controller-manager Update v1 2022-09-17 01:17:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a82cc6e9-c9d2-4ae0-92cc-965008c5c068\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnvdw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnvdw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnvdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 17 01:17:05.574: INFO: Pod "webserver-deployment-795d758f88-dp6m8" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-dp6m8 webserver-deployment-795d758f88- deployment-6437  cbfa4a6a-7e03-4d29-a0eb-de50d8544f74 9140 0 2022-09-17 01:17:03 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a82cc6e9-c9d2-4ae0-92cc-965008c5c068 0xc001c0df20 0xc001c0df21}] []  [{kube-controller-manager Update v1 2022-09-17 01:17:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a82cc6e9-c9d2-4ae0-92cc-965008c5c068\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-17 01:17:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.48\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnvdw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnvdw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnvdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8gqwip-worker-08uw3p,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.48,StartTime:2022-09-17 01:17:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 17 01:17:05.574: INFO: Pod "webserver-deployment-795d758f88-f7kvk" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-f7kvk webserver-deployment-795d758f88- deployment-6437  50095f78-e98c-40af-834c-72535f32ea99 9148 0 2022-09-17 01:17:03 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a82cc6e9-c9d2-4ae0-92cc-965008c5c068 0xc0033800e0 0xc0033800e1}] []  [{kube-controller-manager Update v1 2022-09-17 01:17:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a82cc6e9-c9d2-4ae0-92cc-965008c5c068\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-17 01:17:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.44\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnvdw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnvdw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnvdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.1.44,StartTime:2022-09-17 01:17:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 17 01:17:05.575: INFO: Pod "webserver-deployment-795d758f88-gpjsn" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-gpjsn webserver-deployment-795d758f88- deployment-6437  e816b426-bf38-4b69-965f-a1b316f5d026 9193 0 2022-09-17 01:17:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a82cc6e9-c9d2-4ae0-92cc-965008c5c068 0xc0033802a0 0xc0033802a1}] []  [{kube-controller-manager Update v1 2022-09-17 01:17:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a82cc6e9-c9d2-4ae0-92cc-965008c5c068\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-17 01:17:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnvdw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnvdw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnvdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8gqwip-worker-08uw3p,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2022-09-17 01:17:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 17 01:17:05.575: INFO: Pod "webserver-deployment-795d758f88-jbgqd" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-jbgqd webserver-deployment-795d758f88- deployment-6437  3f89da5b-f51f-46c7-8dc7-f52d0019589d 9196 0 2022-09-17 01:17:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a82cc6e9-c9d2-4ae0-92cc-965008c5c068 0xc003380430 0xc003380431}] []  [{kube-controller-manager Update v1 2022-09-17 01:17:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a82cc6e9-c9d2-4ae0-92cc-965008c5c068\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnvdw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnvdw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnvdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 17 01:17:05.575: INFO: Pod "webserver-deployment-795d758f88-kmlp9" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-kmlp9 webserver-deployment-795d758f88- deployment-6437  5a2b81cb-6bee-4ce6-8fcd-b8046e7c7045 9192 0 2022-09-17 01:17:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a82cc6e9-c9d2-4ae0-92cc-965008c5c068 0xc003380547 0xc003380548}] []  [{kube-controller-manager Update v1 2022-09-17 01:17:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a82cc6e9-c9d2-4ae0-92cc-965008c5c068\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnvdw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnvdw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnvdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 17 01:17:05.575: INFO: Pod "webserver-deployment-795d758f88-ktmk4" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-ktmk4 webserver-deployment-795d758f88- deployment-6437  fd098785-d57d-4769-b575-f1664136c489 9198 0 2022-09-17 01:17:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a82cc6e9-c9d2-4ae0-92cc-965008c5c068 0xc003380657 0xc003380658}] []  [{kube-controller-manager Update v1 2022-09-17 01:17:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a82cc6e9-c9d2-4ae0-92cc-965008c5c068\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnvdw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnvdw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnvdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 17 01:17:05.576: INFO: Pod "webserver-deployment-795d758f88-lhfpl" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-lhfpl webserver-deployment-795d758f88- deployment-6437  8dc54141-fc93-4f5a-b12d-9413e5f2cec5 9144 0 2022-09-17 01:17:03 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a82cc6e9-c9d2-4ae0-92cc-965008c5c068 0xc003380780 0xc003380781}] []  [{kube-controller-manager Update v1 2022-09-17 01:17:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a82cc6e9-c9d2-4ae0-92cc-965008c5c068\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-17 01:17:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnvdw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnvdw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnvdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termi
nationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with 
unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.1.43,StartTime:2022-09-17 01:17:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 17 01:17:05.576: INFO: Pod "webserver-deployment-795d758f88-lt5h6" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-lt5h6 webserver-deployment-795d758f88- deployment-6437  a2d9170a-0536-4d93-bff6-0338a58fd3a1 9188 0 2022-09-17 01:17:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a82cc6e9-c9d2-4ae0-92cc-965008c5c068 0xc003380950 0xc003380951}] []  [{kube-controller-manager Update v1 2022-09-17 01:17:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a82cc6e9-c9d2-4ae0-92cc-965008c5c068\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnvdw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnvdw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnvdw,ReadOnly:true,MountPath:/var/ru
n/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastP
robeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 17 01:17:05.576: INFO: Pod "webserver-deployment-795d758f88-ngbk4" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-ngbk4 webserver-deployment-795d758f88- deployment-6437  ad5c71c5-1d98-4109-a61e-a10d8ea5d8de 9151 0 2022-09-17 01:17:03 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a82cc6e9-c9d2-4ae0-92cc-965008c5c068 0xc003380aa0 0xc003380aa1}] []  [{kube-controller-manager Update v1 2022-09-17 01:17:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a82cc6e9-c9d2-4ae0-92cc-965008c5c068\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-17 01:17:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.20\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnvdw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnvdw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnvdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termi
nationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with 
unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.20,StartTime:2022-09-17 01:17:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 17 01:17:05.576: INFO: Pod "webserver-deployment-795d758f88-p7gnx" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-p7gnx webserver-deployment-795d758f88- deployment-6437  205252e3-9022-411b-bd34-f2a86c29d6b1 9191 0 2022-09-17 01:17:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a82cc6e9-c9d2-4ae0-92cc-965008c5c068 0xc003380c80 0xc003380c81}] []  [{kube-controller-manager Update v1 2022-09-17 01:17:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a82cc6e9-c9d2-4ae0-92cc-965008c5c068\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnvdw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnvdw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnvdw,ReadOnly:true,MountPath:/var/ru
n/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass
:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 17 01:17:05.576: INFO: Pod "webserver-deployment-dd94f59b7-2ncpf" is not available:
    &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-2ncpf webserver-deployment-dd94f59b7- deployment-6437  fd629442-57e9-4eda-bc82-63dc6a7628a7 9195 0 2022-09-17 01:17:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6d1fe72d-05b9-4b75-b2d7-d1deef887837 0xc003380da7 0xc003380da8}] []  [{kube-controller-manager Update v1 2022-09-17 01:17:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d1fe72d-05b9-4b75-b2d7-d1deef887837\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnvdw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnvdw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnvdw,ReadOnly:tr
ue,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]Contai
nerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 17 01:17:05.577: INFO: Pod "webserver-deployment-dd94f59b7-42htm" is not available:
    &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-42htm webserver-deployment-dd94f59b7- deployment-6437  8ee8aaa6-ccb3-4607-9dae-da60c43dfd25 9194 0 2022-09-17 01:17:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6d1fe72d-05b9-4b75-b2d7-d1deef887837 0xc003380ea7 0xc003380ea8}] []  [{kube-controller-manager Update v1 2022-09-17 01:17:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d1fe72d-05b9-4b75-b2d7-d1deef887837\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-17 01:17:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnvdw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnvdw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnvdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-17 01:17:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2022-09-17 01:17:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 73 lines ...
    STEP: Destroying namespace "services-367" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":31,"skipped":807,"failed":0}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 15 lines ...
    STEP: Listing all of the created validation webhooks
    Sep 17 01:16:35.831: INFO: Waiting for webhook configuration to be ready...
    Sep 17 01:16:45.953: INFO: Waiting for webhook configuration to be ready...
    Sep 17 01:16:56.057: INFO: Waiting for webhook configuration to be ready...
    Sep 17 01:17:06.158: INFO: Waiting for webhook configuration to be ready...
    Sep 17 01:17:16.199: INFO: Waiting for webhook configuration to be ready...
    Sep 17 01:17:16.199: FAIL: waiting for webhook configuration to be ready

    Unexpected error:

        <*errors.errorString | 0xc0002ee1f0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      listing validating webhooks should work [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    
      Sep 17 01:17:16.199: waiting for webhook configuration to be ready
      Unexpected error:

          <*errors.errorString | 0xc0002ee1f0>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
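    [editor's note] The "timed out waiting for the condition" failure above is the generic error the e2e framework surfaces when a poll loop exhausts its timeout before the condition (here, the webhook configuration becoming ready) returns true. The sketch below is a simplified, self-contained illustration of that pattern; `pollUntil` and `errWaitTimeout` are illustrative names, not the real `k8s.io/apimachinery/pkg/util/wait` API.

    ```go
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // errWaitTimeout mirrors the sentinel error text seen in the log above.
    var errWaitTimeout = errors.New("timed out waiting for the condition")

    // pollUntil runs condition every interval until it returns true, returns an
    // error, or the timeout elapses. This is a simplified sketch of the
    // poll-with-timeout pattern the e2e framework uses (hypothetical helper,
    // not the upstream wait package).
    func pollUntil(interval, timeout time.Duration, condition func() (bool, error)) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		done, err := condition()
    		if err != nil {
    			return err
    		}
    		if done {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errWaitTimeout
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	// A condition that never succeeds, as when the webhook configuration
    	// in the failing spec above never became ready within the timeout.
    	err := pollUntil(10*time.Millisecond, 50*time.Millisecond, func() (bool, error) {
    		return false, nil
    	})
    	fmt.Println(err)
    }
    ```

    Each "Waiting for webhook configuration to be ready..." INFO line in the log corresponds to one unsuccessful iteration of such a loop; after the last interval the sentinel error is returned and the spec fails.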
... skipping 423 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:17:24.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svc-latency-6865" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":32,"skipped":822,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 27 lines ...
    STEP: Destroying namespace "webhook-3024-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":33,"skipped":823,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:17:30.417: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53894f15-8cdd-4cfe-8a7b-922419785239" in namespace "downward-api-413" to be "Succeeded or Failed"

    Sep 17 01:17:30.420: INFO: Pod "downwardapi-volume-53894f15-8cdd-4cfe-8a7b-922419785239": Phase="Pending", Reason="", readiness=false. Elapsed: 2.515019ms
    Sep 17 01:17:32.424: INFO: Pod "downwardapi-volume-53894f15-8cdd-4cfe-8a7b-922419785239": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007097823s
    STEP: Saw pod success
    Sep 17 01:17:32.424: INFO: Pod "downwardapi-volume-53894f15-8cdd-4cfe-8a7b-922419785239" satisfied condition "Succeeded or Failed"

    Sep 17 01:17:32.435: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr pod downwardapi-volume-53894f15-8cdd-4cfe-8a7b-922419785239 container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:17:32.479: INFO: Waiting for pod downwardapi-volume-53894f15-8cdd-4cfe-8a7b-922419785239 to disappear
    Sep 17 01:17:32.482: INFO: Pod downwardapi-volume-53894f15-8cdd-4cfe-8a7b-922419785239 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:17:32.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-413" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":857,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:17:32.562: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep 17 01:17:32.619: INFO: Waiting up to 5m0s for pod "pod-cbbbdea3-fe90-4ea8-94dd-8adf65184d10" in namespace "emptydir-7175" to be "Succeeded or Failed"

    Sep 17 01:17:32.633: INFO: Pod "pod-cbbbdea3-fe90-4ea8-94dd-8adf65184d10": Phase="Pending", Reason="", readiness=false. Elapsed: 14.014707ms
    Sep 17 01:17:34.643: INFO: Pod "pod-cbbbdea3-fe90-4ea8-94dd-8adf65184d10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023777328s
    STEP: Saw pod success
    Sep 17 01:17:34.643: INFO: Pod "pod-cbbbdea3-fe90-4ea8-94dd-8adf65184d10" satisfied condition "Succeeded or Failed"

    Sep 17 01:17:34.646: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc pod pod-cbbbdea3-fe90-4ea8-94dd-8adf65184d10 container test-container: <nil>
    STEP: delete the pod
    Sep 17 01:17:34.677: INFO: Waiting for pod pod-cbbbdea3-fe90-4ea8-94dd-8adf65184d10 to disappear
    Sep 17 01:17:34.680: INFO: Pod pod-cbbbdea3-fe90-4ea8-94dd-8adf65184d10 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:17:34.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7175" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":888,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":34,"skipped":674,"failed":0}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:17:05.661: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
    Sep 17 01:17:09.825: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:09.828: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:09.852: INFO: Unable to read jessie_udp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:09.855: INFO: Unable to read jessie_tcp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:09.858: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:09.862: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:09.882: INFO: Lookups using dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f failed for: [wheezy_udp@dns-test-service.dns-9031.svc.cluster.local wheezy_tcp@dns-test-service.dns-9031.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local jessie_udp@dns-test-service.dns-9031.svc.cluster.local jessie_tcp@dns-test-service.dns-9031.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local]

    
    Sep 17 01:17:14.887: INFO: Unable to read wheezy_udp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:14.890: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:14.894: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:14.898: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:14.922: INFO: Unable to read jessie_udp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:14.926: INFO: Unable to read jessie_tcp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:14.930: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:14.933: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:14.957: INFO: Lookups using dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f failed for: [wheezy_udp@dns-test-service.dns-9031.svc.cluster.local wheezy_tcp@dns-test-service.dns-9031.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local jessie_udp@dns-test-service.dns-9031.svc.cluster.local jessie_tcp@dns-test-service.dns-9031.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local]

    
    Sep 17 01:17:19.887: INFO: Unable to read wheezy_udp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:19.891: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:19.895: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:19.898: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:19.962: INFO: Unable to read jessie_udp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:19.969: INFO: Unable to read jessie_tcp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:19.974: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:19.978: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:20.004: INFO: Lookups using dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f failed for: [wheezy_udp@dns-test-service.dns-9031.svc.cluster.local wheezy_tcp@dns-test-service.dns-9031.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local jessie_udp@dns-test-service.dns-9031.svc.cluster.local jessie_tcp@dns-test-service.dns-9031.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local]

    
    Sep 17 01:17:24.887: INFO: Unable to read wheezy_udp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:24.891: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:24.894: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:24.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:24.926: INFO: Unable to read jessie_udp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:24.929: INFO: Unable to read jessie_tcp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:24.934: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:24.938: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:24.958: INFO: Lookups using dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f failed for: [wheezy_udp@dns-test-service.dns-9031.svc.cluster.local wheezy_tcp@dns-test-service.dns-9031.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local jessie_udp@dns-test-service.dns-9031.svc.cluster.local jessie_tcp@dns-test-service.dns-9031.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local]

    
    Sep 17 01:17:29.887: INFO: Unable to read wheezy_udp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:29.890: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:29.895: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:29.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:29.936: INFO: Unable to read jessie_udp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:29.943: INFO: Unable to read jessie_tcp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:29.950: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:29.956: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:29.994: INFO: Lookups using dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f failed for: [wheezy_udp@dns-test-service.dns-9031.svc.cluster.local wheezy_tcp@dns-test-service.dns-9031.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local jessie_udp@dns-test-service.dns-9031.svc.cluster.local jessie_tcp@dns-test-service.dns-9031.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local]

    
    Sep 17 01:17:34.886: INFO: Unable to read wheezy_udp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:34.893: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:34.898: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:34.901: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:34.936: INFO: Unable to read jessie_udp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:34.941: INFO: Unable to read jessie_tcp@dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:34.949: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:34.954: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local from pod dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f: the server could not find the requested resource (get pods dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f)
    Sep 17 01:17:35.001: INFO: Lookups using dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f failed for: [wheezy_udp@dns-test-service.dns-9031.svc.cluster.local wheezy_tcp@dns-test-service.dns-9031.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local jessie_udp@dns-test-service.dns-9031.svc.cluster.local jessie_tcp@dns-test-service.dns-9031.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9031.svc.cluster.local]

    
    Sep 17 01:17:39.971: INFO: DNS probes using dns-9031/dns-test-7c5c884e-c2aa-4d51-b10f-14857812cc3f succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:17:40.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-9031" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":35,"skipped":674,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:17:40.110: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep 17 01:17:40.180: INFO: Waiting up to 5m0s for pod "pod-d431299c-c35e-42ed-a330-8906fbf5512d" in namespace "emptydir-3522" to be "Succeeded or Failed"

    Sep 17 01:17:40.191: INFO: Pod "pod-d431299c-c35e-42ed-a330-8906fbf5512d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.927255ms
    Sep 17 01:17:42.196: INFO: Pod "pod-d431299c-c35e-42ed-a330-8906fbf5512d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016171883s
    STEP: Saw pod success
    Sep 17 01:17:42.196: INFO: Pod "pod-d431299c-c35e-42ed-a330-8906fbf5512d" satisfied condition "Succeeded or Failed"

    Sep 17 01:17:42.199: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-d431299c-c35e-42ed-a330-8906fbf5512d container test-container: <nil>
    STEP: delete the pod
    Sep 17 01:17:42.218: INFO: Waiting for pod pod-d431299c-c35e-42ed-a330-8906fbf5512d to disappear
    Sep 17 01:17:42.221: INFO: Pod pod-d431299c-c35e-42ed-a330-8906fbf5512d no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:17:42.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-3522" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":678,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:17:42.319: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test emptydir volume type on node default medium
    Sep 17 01:17:42.357: INFO: Waiting up to 5m0s for pod "pod-ec9433f3-98bf-4f80-9d62-0cdde0c835d1" in namespace "emptydir-6789" to be "Succeeded or Failed"

    Sep 17 01:17:42.360: INFO: Pod "pod-ec9433f3-98bf-4f80-9d62-0cdde0c835d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.503571ms
    Sep 17 01:17:44.364: INFO: Pod "pod-ec9433f3-98bf-4f80-9d62-0cdde0c835d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006650701s
    STEP: Saw pod success
    Sep 17 01:17:44.364: INFO: Pod "pod-ec9433f3-98bf-4f80-9d62-0cdde0c835d1" satisfied condition "Succeeded or Failed"

    Sep 17 01:17:44.367: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-ec9433f3-98bf-4f80-9d62-0cdde0c835d1 container test-container: <nil>
    STEP: delete the pod
    Sep 17 01:17:44.382: INFO: Waiting for pod pod-ec9433f3-98bf-4f80-9d62-0cdde0c835d1 to disappear
    Sep 17 01:17:44.385: INFO: Pod pod-ec9433f3-98bf-4f80-9d62-0cdde0c835d1 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 38 lines ...
    STEP: Destroying namespace "webhook-1232-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":36,"skipped":908,"failed":0}

    
    SSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":735,"failed":0}

    [BeforeEach] [k8s.io] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:17:44.396: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-probe
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:18:02.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-6792" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":735,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":17,"skipped":180,"failed":2,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:17:16.347: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
    STEP: Listing all of the created validation webhooks
    Sep 17 01:17:30.002: INFO: Waiting for webhook configuration to be ready...
    Sep 17 01:17:40.139: INFO: Waiting for webhook configuration to be ready...
    Sep 17 01:17:50.227: INFO: Waiting for webhook configuration to be ready...
    Sep 17 01:18:00.324: INFO: Waiting for webhook configuration to be ready...
    Sep 17 01:18:10.347: INFO: Waiting for webhook configuration to be ready...
    Sep 17 01:18:10.348: FAIL: waiting for webhook configuration to be ready

    Unexpected error:

        <*errors.errorString | 0xc0002ee1f0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      listing validating webhooks should work [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    
      Sep 17 01:18:10.348: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc0002ee1f0>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:605
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":17,"skipped":180,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:18:10.418: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
    STEP: Destroying namespace "webhook-3595-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":18,"skipped":180,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:18:25.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-1610" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":37,"skipped":915,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:18:26.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-3609" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":755,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: creating a second pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep 17 01:18:18.739: INFO: File wheezy_udp@dns-test-service-3.dns-8219.svc.cluster.local from pod  dns-8219/dns-test-c3a5451c-32e0-4bb0-9af0-d5dcebb6bf26 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 17 01:18:18.744: INFO: Lookups using dns-8219/dns-test-c3a5451c-32e0-4bb0-9af0-d5dcebb6bf26 failed for: [wheezy_udp@dns-test-service-3.dns-8219.svc.cluster.local]
    
    Sep 17 01:18:23.752: INFO: File jessie_udp@dns-test-service-3.dns-8219.svc.cluster.local from pod  dns-8219/dns-test-c3a5451c-32e0-4bb0-9af0-d5dcebb6bf26 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 17 01:18:23.752: INFO: Lookups using dns-8219/dns-test-c3a5451c-32e0-4bb0-9af0-d5dcebb6bf26 failed for: [jessie_udp@dns-test-service-3.dns-8219.svc.cluster.local]
    
    Sep 17 01:18:28.756: INFO: DNS probes using dns-test-c3a5451c-32e0-4bb0-9af0-d5dcebb6bf26 succeeded
    
    STEP: deleting the pod
    STEP: changing the service to type=ClusterIP
    STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8219.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8219.svc.cluster.local; sleep 1; done
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:18:32.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-8219" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":19,"skipped":190,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:18:42.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-9985" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":40,"skipped":789,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 45 lines ...
    STEP: Destroying namespace "services-2985" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":20,"skipped":199,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:18:50.009: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating secret with name secret-test-9d9e602d-d8ca-4c56-9b28-957e3b3af75b
    STEP: Creating a pod to test consume secrets
    Sep 17 01:18:50.078: INFO: Waiting up to 5m0s for pod "pod-secrets-2bb4af13-2162-47c1-8acd-431447a68af6" in namespace "secrets-7723" to be "Succeeded or Failed"
    Sep 17 01:18:50.082: INFO: Pod "pod-secrets-2bb4af13-2162-47c1-8acd-431447a68af6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.298263ms
    Sep 17 01:18:52.086: INFO: Pod "pod-secrets-2bb4af13-2162-47c1-8acd-431447a68af6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008086101s
    STEP: Saw pod success
    Sep 17 01:18:52.086: INFO: Pod "pod-secrets-2bb4af13-2162-47c1-8acd-431447a68af6" satisfied condition "Succeeded or Failed"
    Sep 17 01:18:52.088: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-secrets-2bb4af13-2162-47c1-8acd-431447a68af6 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:18:52.103: INFO: Waiting for pod pod-secrets-2bb4af13-2162-47c1-8acd-431447a68af6 to disappear
    Sep 17 01:18:52.105: INFO: Pod pod-secrets-2bb4af13-2162-47c1-8acd-431447a68af6 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 37 lines ...
    STEP: Destroying namespace "webhook-5772-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":41,"skipped":818,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 158 lines ...
    Sep 17 01:19:00.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1233 create -f -'
    Sep 17 01:19:01.045: INFO: stderr: ""
    Sep 17 01:19:01.045: INFO: stdout: "deployment.apps/agnhost-replica created\n"
    STEP: validating guestbook app
    Sep 17 01:19:01.045: INFO: Waiting for all frontend pods to be Running.
    Sep 17 01:19:06.095: INFO: Waiting for frontend to serve content.
    Sep 17 01:19:11.109: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
    Sep 17 01:19:16.119: INFO: Trying to add a new entry to the guestbook.
    Sep 17 01:19:16.128: INFO: Verifying that added entry can be retrieved.
    STEP: using delete to clean up resources
    Sep 17 01:19:16.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1233 delete --grace-period=0 --force -f -'
    Sep 17 01:19:16.274: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
    Sep 17 01:19:16.274: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:19:16.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-1233" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":42,"skipped":828,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
    [It] should serve multiport endpoints from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: creating service multi-endpoint-test in namespace services-1161
    STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1161 to expose endpoints map[]
    Sep 17 01:19:17.070: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found
    Sep 17 01:19:18.081: INFO: successfully validated that service multi-endpoint-test in namespace services-1161 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-1161
    STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1161 to expose endpoints map[pod1:[100]]
    Sep 17 01:19:19.105: INFO: successfully validated that service multi-endpoint-test in namespace services-1161 exposes endpoints map[pod1:[100]]
    STEP: Creating pod pod2 in namespace services-1161
    STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1161 to expose endpoints map[pod1:[100] pod2:[101]]
... skipping 10 lines ...
    STEP: Destroying namespace "services-1161" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":43,"skipped":851,"failed":0}

    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:19:21.256: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap with name configmap-test-volume-1739751f-aefe-49ea-bd23-8f8a87a3cd02
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:19:21.326: INFO: Waiting up to 5m0s for pod "pod-configmaps-7aced2e2-5b55-46e6-88f4-9c2feac1a4c0" in namespace "configmap-1918" to be "Succeeded or Failed"
    Sep 17 01:19:21.333: INFO: Pod "pod-configmaps-7aced2e2-5b55-46e6-88f4-9c2feac1a4c0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.368432ms
    Sep 17 01:19:23.337: INFO: Pod "pod-configmaps-7aced2e2-5b55-46e6-88f4-9c2feac1a4c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01159284s
    STEP: Saw pod success
    Sep 17 01:19:23.337: INFO: Pod "pod-configmaps-7aced2e2-5b55-46e6-88f4-9c2feac1a4c0" satisfied condition "Succeeded or Failed"
    Sep 17 01:19:23.340: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-configmaps-7aced2e2-5b55-46e6-88f4-9c2feac1a4c0 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:19:23.360: INFO: Waiting for pod pod-configmaps-7aced2e2-5b55-46e6-88f4-9c2feac1a4c0 to disappear
    Sep 17 01:19:23.363: INFO: Pod pod-configmaps-7aced2e2-5b55-46e6-88f4-9c2feac1a4c0 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:19:23.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-1918" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":851,"failed":0}

    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:19:23.375: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap with name configmap-test-volume-85f5d1bd-6707-4b80-9147-010e8d6b03b7
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:19:23.415: INFO: Waiting up to 5m0s for pod "pod-configmaps-7d37fb5e-5e04-41c3-9d22-8a4d18c133c1" in namespace "configmap-4175" to be "Succeeded or Failed"
    Sep 17 01:19:23.418: INFO: Pod "pod-configmaps-7d37fb5e-5e04-41c3-9d22-8a4d18c133c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.82133ms
    Sep 17 01:19:25.422: INFO: Pod "pod-configmaps-7d37fb5e-5e04-41c3-9d22-8a4d18c133c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007233209s
    STEP: Saw pod success
    Sep 17 01:19:25.422: INFO: Pod "pod-configmaps-7d37fb5e-5e04-41c3-9d22-8a4d18c133c1" satisfied condition "Succeeded or Failed"
    Sep 17 01:19:25.425: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-configmaps-7d37fb5e-5e04-41c3-9d22-8a4d18c133c1 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:19:25.447: INFO: Waiting for pod pod-configmaps-7d37fb5e-5e04-41c3-9d22-8a4d18c133c1 to disappear
    Sep 17 01:19:25.450: INFO: Pod pod-configmaps-7d37fb5e-5e04-41c3-9d22-8a4d18c133c1 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:19:25.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4175" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":851,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:19:29.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-4713" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":46,"skipped":858,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:19:29.634: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70ef48a5-32cc-43cd-ab52-a2fd527e3cd8" in namespace "projected-7328" to be "Succeeded or Failed"
    Sep 17 01:19:29.642: INFO: Pod "downwardapi-volume-70ef48a5-32cc-43cd-ab52-a2fd527e3cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.87415ms
    Sep 17 01:19:31.646: INFO: Pod "downwardapi-volume-70ef48a5-32cc-43cd-ab52-a2fd527e3cd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011525482s
    STEP: Saw pod success
    Sep 17 01:19:31.646: INFO: Pod "downwardapi-volume-70ef48a5-32cc-43cd-ab52-a2fd527e3cd8" satisfied condition "Succeeded or Failed"
    Sep 17 01:19:31.649: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod downwardapi-volume-70ef48a5-32cc-43cd-ab52-a2fd527e3cd8 container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:19:31.675: INFO: Waiting for pod downwardapi-volume-70ef48a5-32cc-43cd-ab52-a2fd527e3cd8 to disappear
    Sep 17 01:19:31.678: INFO: Pod downwardapi-volume-70ef48a5-32cc-43cd-ab52-a2fd527e3cd8 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:19:31.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7328" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":872,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":271,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

    [BeforeEach] [k8s.io] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:18:52.115: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-probe
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:19:48.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-8322" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":271,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:19:48.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-9133" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":23,"skipped":295,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:19:49.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-2737" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":905,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:19:50.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-3876" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":306,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with secret pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating pod pod-subpath-test-secret-zspp
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 17 01:19:49.896: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zspp" in namespace "subpath-3216" to be "Succeeded or Failed"
    Sep 17 01:19:49.901: INFO: Pod "pod-subpath-test-secret-zspp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215587ms
    Sep 17 01:19:51.905: INFO: Pod "pod-subpath-test-secret-zspp": Phase="Running", Reason="", readiness=true. Elapsed: 2.008919729s
    Sep 17 01:19:53.911: INFO: Pod "pod-subpath-test-secret-zspp": Phase="Running", Reason="", readiness=true. Elapsed: 4.014400054s
    Sep 17 01:19:55.915: INFO: Pod "pod-subpath-test-secret-zspp": Phase="Running", Reason="", readiness=true. Elapsed: 6.018520527s
    Sep 17 01:19:57.919: INFO: Pod "pod-subpath-test-secret-zspp": Phase="Running", Reason="", readiness=true. Elapsed: 8.022505953s
    Sep 17 01:19:59.923: INFO: Pod "pod-subpath-test-secret-zspp": Phase="Running", Reason="", readiness=true. Elapsed: 10.026439582s
    Sep 17 01:20:01.928: INFO: Pod "pod-subpath-test-secret-zspp": Phase="Running", Reason="", readiness=true. Elapsed: 12.031189137s
    Sep 17 01:20:03.933: INFO: Pod "pod-subpath-test-secret-zspp": Phase="Running", Reason="", readiness=true. Elapsed: 14.036157089s
    Sep 17 01:20:05.937: INFO: Pod "pod-subpath-test-secret-zspp": Phase="Running", Reason="", readiness=true. Elapsed: 16.040404897s
    Sep 17 01:20:07.941: INFO: Pod "pod-subpath-test-secret-zspp": Phase="Running", Reason="", readiness=true. Elapsed: 18.04421405s
    Sep 17 01:20:09.945: INFO: Pod "pod-subpath-test-secret-zspp": Phase="Running", Reason="", readiness=true. Elapsed: 20.048694789s
    Sep 17 01:20:11.949: INFO: Pod "pod-subpath-test-secret-zspp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.052777847s
    STEP: Saw pod success
    Sep 17 01:20:11.949: INFO: Pod "pod-subpath-test-secret-zspp" satisfied condition "Succeeded or Failed"
    Sep 17 01:20:11.952: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-subpath-test-secret-zspp container test-container-subpath-secret-zspp: <nil>
    STEP: delete the pod
    Sep 17 01:20:11.972: INFO: Waiting for pod pod-subpath-test-secret-zspp to disappear
    Sep 17 01:20:11.975: INFO: Pod pod-subpath-test-secret-zspp no longer exists
    STEP: Deleting pod pod-subpath-test-secret-zspp
    Sep 17 01:20:11.975: INFO: Deleting pod "pod-subpath-test-secret-zspp" in namespace "subpath-3216"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:20:11.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-3216" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":49,"skipped":927,"failed":0}

    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] [sig-node] PreStop
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:20:21.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "prestop-9350" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":50,"skipped":940,"failed":0}

    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "webhook-5834-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":51,"skipped":985,"failed":0}

    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:20:25.243: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap with name projected-configmap-test-volume-3075312d-96a9-48e5-863c-848838077757
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:20:25.302: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0b73b30a-9bfe-4f57-a397-74a42657d4e0" in namespace "projected-7087" to be "Succeeded or Failed"
    Sep 17 01:20:25.307: INFO: Pod "pod-projected-configmaps-0b73b30a-9bfe-4f57-a397-74a42657d4e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.277579ms
    Sep 17 01:20:27.310: INFO: Pod "pod-projected-configmaps-0b73b30a-9bfe-4f57-a397-74a42657d4e0": Phase="Running", Reason="", readiness=true. Elapsed: 2.008003076s
    Sep 17 01:20:29.316: INFO: Pod "pod-projected-configmaps-0b73b30a-9bfe-4f57-a397-74a42657d4e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013901626s
    STEP: Saw pod success
    Sep 17 01:20:29.316: INFO: Pod "pod-projected-configmaps-0b73b30a-9bfe-4f57-a397-74a42657d4e0" satisfied condition "Succeeded or Failed"
    Sep 17 01:20:29.325: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-projected-configmaps-0b73b30a-9bfe-4f57-a397-74a42657d4e0 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:20:29.346: INFO: Waiting for pod pod-projected-configmaps-0b73b30a-9bfe-4f57-a397-74a42657d4e0 to disappear
    Sep 17 01:20:29.350: INFO: Pod pod-projected-configmaps-0b73b30a-9bfe-4f57-a397-74a42657d4e0 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:20:29.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7087" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":996,"failed":0}

    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:20:42.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-9150" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":53,"skipped":997,"failed":0}

    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:20:42.046: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap with name projected-configmap-test-volume-map-7ee236a3-2975-4f82-ad69-24b50bca551f
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:20:42.089: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3b8d9625-0326-4e9a-98b6-2b0ac482cb2a" in namespace "projected-8632" to be "Succeeded or Failed"
    Sep 17 01:20:42.094: INFO: Pod "pod-projected-configmaps-3b8d9625-0326-4e9a-98b6-2b0ac482cb2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.637944ms
    Sep 17 01:20:44.098: INFO: Pod "pod-projected-configmaps-3b8d9625-0326-4e9a-98b6-2b0ac482cb2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009111859s
    STEP: Saw pod success
    Sep 17 01:20:44.098: INFO: Pod "pod-projected-configmaps-3b8d9625-0326-4e9a-98b6-2b0ac482cb2a" satisfied condition "Succeeded or Failed"
    Sep 17 01:20:44.101: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-projected-configmaps-3b8d9625-0326-4e9a-98b6-2b0ac482cb2a container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:20:44.118: INFO: Waiting for pod pod-projected-configmaps-3b8d9625-0326-4e9a-98b6-2b0ac482cb2a to disappear
    Sep 17 01:20:44.120: INFO: Pod pod-projected-configmaps-3b8d9625-0326-4e9a-98b6-2b0ac482cb2a no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:20:44.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8632" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":1013,"failed":0}

    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 9 lines ...
    STEP: creating replication controller affinity-nodeport-transition in namespace services-5424
    I0917 01:18:25.881472      16 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-5424, replica count: 3
    I0917 01:18:28.932046      16 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
    Sep 17 01:18:28.945: INFO: Creating new exec pod
    Sep 17 01:18:31.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
    Sep 17 01:18:34.197: INFO: rc: 1
    Sep 17 01:18:34.197: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80:
    Command stdout:
    
    stderr:
    + nc -zv -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
... skipping 450 lines (repeated nc connectivity retries against affinity-nodeport-transition:80, Sep 17 01:18:35 – 01:19:55, all failing with "Operation in progress", exit status 1) ...
    Sep 17 01:20:05.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
    Sep 17 01:20:07.381: INFO: rc: 1
    Sep 17 01:20:07.381: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + nc -zv -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 17 01:20:08.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
    Sep 17 01:20:10.397: INFO: rc: 1
    Sep 17 01:20:10.397: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + nc -zv -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 17 01:20:11.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
    Sep 17 01:20:13.414: INFO: rc: 1
    Sep 17 01:20:13.414: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + nc -zv -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 17 01:20:14.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
    Sep 17 01:20:16.376: INFO: rc: 1
    Sep 17 01:20:16.376: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + nc -zv -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 17 01:20:17.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
    Sep 17 01:20:19.394: INFO: rc: 1
    Sep 17 01:20:19.394: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + nc -zv -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 17 01:20:20.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
    Sep 17 01:20:22.432: INFO: rc: 1
    Sep 17 01:20:22.432: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + nc -zv -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 17 01:20:23.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
    Sep 17 01:20:25.390: INFO: rc: 1
    Sep 17 01:20:25.390: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + nc -zv -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 17 01:20:26.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
    Sep 17 01:20:28.534: INFO: rc: 1
    Sep 17 01:20:28.534: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + nc -zv -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 17 01:20:29.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
    Sep 17 01:20:31.382: INFO: rc: 1
    Sep 17 01:20:31.382: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + nc -zv -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 17 01:20:32.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
    Sep 17 01:20:34.388: INFO: rc: 1
    Sep 17 01:20:34.388: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + nc -zv -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 17 01:20:34.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
    Sep 17 01:20:36.596: INFO: rc: 1
    Sep 17 01:20:36.596: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5424 exec execpod-affinityslkrv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + nc -zv -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 17 01:20:36.597: FAIL: Unexpected error:

        <*errors.errorString | 0xc00094aeb0>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol
    occurred
    
... skipping 27 lines ...
    • Failure [143.995 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
      should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    
      Sep 17 01:20:36.597: Unexpected error:

          <*errors.errorString | 0xc00094aeb0>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol
      occurred
    
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:20:50.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-7781" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":55,"skipped":1034,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:20:51.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "certificates-3780" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":56,"skipped":1037,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:20:51.627: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0eadc3f3-7958-463b-861e-666afc1cb7fb" in namespace "downward-api-8478" to be "Succeeded or Failed"

    Sep 17 01:20:51.630: INFO: Pod "downwardapi-volume-0eadc3f3-7958-463b-861e-666afc1cb7fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.836204ms
    Sep 17 01:20:53.634: INFO: Pod "downwardapi-volume-0eadc3f3-7958-463b-861e-666afc1cb7fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007175575s
    STEP: Saw pod success
    Sep 17 01:20:53.634: INFO: Pod "downwardapi-volume-0eadc3f3-7958-463b-861e-666afc1cb7fb" satisfied condition "Succeeded or Failed"

    Sep 17 01:20:53.638: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc pod downwardapi-volume-0eadc3f3-7958-463b-861e-666afc1cb7fb container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:20:53.672: INFO: Waiting for pod downwardapi-volume-0eadc3f3-7958-463b-861e-666afc1cb7fb to disappear
    Sep 17 01:20:53.678: INFO: Pod downwardapi-volume-0eadc3f3-7958-463b-861e-666afc1cb7fb no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:20:53.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-8478" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":1044,"failed":0}

    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:20:53.689: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward api env vars
    Sep 17 01:20:53.730: INFO: Waiting up to 5m0s for pod "downward-api-492eaa78-c0d3-4e65-bdb5-768ac6f61368" in namespace "downward-api-6193" to be "Succeeded or Failed"

    Sep 17 01:20:53.733: INFO: Pod "downward-api-492eaa78-c0d3-4e65-bdb5-768ac6f61368": Phase="Pending", Reason="", readiness=false. Elapsed: 2.671091ms
    Sep 17 01:20:55.737: INFO: Pod "downward-api-492eaa78-c0d3-4e65-bdb5-768ac6f61368": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006842456s
    STEP: Saw pod success
    Sep 17 01:20:55.737: INFO: Pod "downward-api-492eaa78-c0d3-4e65-bdb5-768ac6f61368" satisfied condition "Succeeded or Failed"

    Sep 17 01:20:55.740: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc pod downward-api-492eaa78-c0d3-4e65-bdb5-768ac6f61368 container dapi-container: <nil>
    STEP: delete the pod
    Sep 17 01:20:55.755: INFO: Waiting for pod downward-api-492eaa78-c0d3-4e65-bdb5-768ac6f61368 to disappear
    Sep 17 01:20:55.757: INFO: Pod downward-api-492eaa78-c0d3-4e65-bdb5-768ac6f61368 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:20:55.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6193" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":1044,"failed":0}

    [BeforeEach] [k8s.io] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:20:55.769: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pods
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:20:58.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-9632" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":1044,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:20:58.364: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test substitution in volume subpath
    Sep 17 01:20:58.398: INFO: Waiting up to 5m0s for pod "var-expansion-2f502adb-8c1f-404d-afa8-ff4a295fb24b" in namespace "var-expansion-6478" to be "Succeeded or Failed"

    Sep 17 01:20:58.402: INFO: Pod "var-expansion-2f502adb-8c1f-404d-afa8-ff4a295fb24b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.394894ms
    Sep 17 01:21:00.405: INFO: Pod "var-expansion-2f502adb-8c1f-404d-afa8-ff4a295fb24b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007197163s
    STEP: Saw pod success
    Sep 17 01:21:00.405: INFO: Pod "var-expansion-2f502adb-8c1f-404d-afa8-ff4a295fb24b" satisfied condition "Succeeded or Failed"

    Sep 17 01:21:00.408: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod var-expansion-2f502adb-8c1f-404d-afa8-ff4a295fb24b container dapi-container: <nil>
    STEP: delete the pod
    Sep 17 01:21:00.432: INFO: Waiting for pod var-expansion-2f502adb-8c1f-404d-afa8-ff4a295fb24b to disappear
    Sep 17 01:21:00.435: INFO: Pod var-expansion-2f502adb-8c1f-404d-afa8-ff4a295fb24b no longer exists
    [AfterEach] [k8s.io] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:21:00.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-6478" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":-1,"completed":60,"skipped":1057,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 41 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:21:30.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-3678" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":25,"skipped":331,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:21:30.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-4152" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":26,"skipped":344,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:21:33.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-71" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":27,"skipped":412,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-node] RuntimeClass
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:21:33.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "runtimeclass-6161" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":-1,"completed":28,"skipped":414,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:21:33.821: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating secret with name secret-test-map-97f1be14-e4c8-41c2-9f60-06da0d999fcb
    STEP: Creating a pod to test consume secrets
    Sep 17 01:21:33.862: INFO: Waiting up to 5m0s for pod "pod-secrets-2321d4d7-0fd5-4bfb-bd3a-3e3863a865eb" in namespace "secrets-3821" to be "Succeeded or Failed"

    Sep 17 01:21:33.865: INFO: Pod "pod-secrets-2321d4d7-0fd5-4bfb-bd3a-3e3863a865eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.661732ms
    Sep 17 01:21:35.868: INFO: Pod "pod-secrets-2321d4d7-0fd5-4bfb-bd3a-3e3863a865eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005753525s
    STEP: Saw pod success
    Sep 17 01:21:35.868: INFO: Pod "pod-secrets-2321d4d7-0fd5-4bfb-bd3a-3e3863a865eb" satisfied condition "Succeeded or Failed"

    Sep 17 01:21:35.871: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod pod-secrets-2321d4d7-0fd5-4bfb-bd3a-3e3863a865eb container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:21:35.888: INFO: Waiting for pod pod-secrets-2321d4d7-0fd5-4bfb-bd3a-3e3863a865eb to disappear
    Sep 17 01:21:35.890: INFO: Pod pod-secrets-2321d4d7-0fd5-4bfb-bd3a-3e3863a865eb no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:21:35.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3821" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":433,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 60 lines ...
    STEP: Destroying namespace "services-9101" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":61,"skipped":1066,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:22:04.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-1075" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":62,"skipped":1077,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:22:04.303: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating secret with name projected-secret-test-469ac9ac-4427-472f-bdd2-3e0b32fc69fb
    STEP: Creating a pod to test consume secrets
    Sep 17 01:22:04.349: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a9033440-586a-4fb9-a6c3-05d20a544746" in namespace "projected-9159" to be "Succeeded or Failed"

    Sep 17 01:22:04.353: INFO: Pod "pod-projected-secrets-a9033440-586a-4fb9-a6c3-05d20a544746": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132237ms
    Sep 17 01:22:06.359: INFO: Pod "pod-projected-secrets-a9033440-586a-4fb9-a6c3-05d20a544746": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010207885s
    STEP: Saw pod success
    Sep 17 01:22:06.359: INFO: Pod "pod-projected-secrets-a9033440-586a-4fb9-a6c3-05d20a544746" satisfied condition "Succeeded or Failed"

    Sep 17 01:22:06.362: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod pod-projected-secrets-a9033440-586a-4fb9-a6c3-05d20a544746 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:22:06.380: INFO: Waiting for pod pod-projected-secrets-a9033440-586a-4fb9-a6c3-05d20a544746 to disappear
    Sep 17 01:22:06.387: INFO: Pod pod-projected-secrets-a9033440-586a-4fb9-a6c3-05d20a544746 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:22:06.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9159" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":63,"skipped":1085,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:22:13.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-8461" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1095,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:22:13.046: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating secret with name secret-test-c8aca21b-6b3f-4bc3-be27-f7008b4774bd
    STEP: Creating a pod to test consume secrets
    Sep 17 01:22:13.090: INFO: Waiting up to 5m0s for pod "pod-secrets-4ea3e3d1-fd8b-4003-bd27-5b516bb7236a" in namespace "secrets-3936" to be "Succeeded or Failed"
    Sep 17 01:22:13.093: INFO: Pod "pod-secrets-4ea3e3d1-fd8b-4003-bd27-5b516bb7236a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.416786ms
    Sep 17 01:22:15.097: INFO: Pod "pod-secrets-4ea3e3d1-fd8b-4003-bd27-5b516bb7236a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00712503s
    STEP: Saw pod success
    Sep 17 01:22:15.097: INFO: Pod "pod-secrets-4ea3e3d1-fd8b-4003-bd27-5b516bb7236a" satisfied condition "Succeeded or Failed"
    Sep 17 01:22:15.100: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-secrets-4ea3e3d1-fd8b-4003-bd27-5b516bb7236a container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:22:15.115: INFO: Waiting for pod pod-secrets-4ea3e3d1-fd8b-4003-bd27-5b516bb7236a to disappear
    Sep 17 01:22:15.118: INFO: Pod pod-secrets-4ea3e3d1-fd8b-4003-bd27-5b516bb7236a no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:22:15.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3936" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":65,"skipped":1116,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:22:17.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-3924" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":66,"skipped":1172,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:22:17.288: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap with name configmap-test-volume-map-fa81daab-c27b-4acf-9f84-f695ee6b13d5
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:22:17.333: INFO: Waiting up to 5m0s for pod "pod-configmaps-74ff91a1-2c77-4040-be06-558a806148cf" in namespace "configmap-3005" to be "Succeeded or Failed"
    Sep 17 01:22:17.337: INFO: Pod "pod-configmaps-74ff91a1-2c77-4040-be06-558a806148cf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.536759ms
    Sep 17 01:22:19.342: INFO: Pod "pod-configmaps-74ff91a1-2c77-4040-be06-558a806148cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008382259s
    STEP: Saw pod success
    Sep 17 01:22:19.342: INFO: Pod "pod-configmaps-74ff91a1-2c77-4040-be06-558a806148cf" satisfied condition "Succeeded or Failed"
    Sep 17 01:22:19.345: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-configmaps-74ff91a1-2c77-4040-be06-558a806148cf container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:22:19.364: INFO: Waiting for pod pod-configmaps-74ff91a1-2c77-4040-be06-558a806148cf to disappear
    Sep 17 01:22:19.370: INFO: Pod pod-configmaps-74ff91a1-2c77-4040-be06-558a806148cf no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:22:19.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-3005" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":1188,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:22:21.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-5357" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":68,"skipped":1201,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:22:21.504: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating secret with name secret-test-0f6a9b1f-121b-4111-9547-78bf6189063e
    STEP: Creating a pod to test consume secrets
    Sep 17 01:22:21.556: INFO: Waiting up to 5m0s for pod "pod-secrets-12d71bae-fdf2-4993-910d-c6fa4f5647d5" in namespace "secrets-2615" to be "Succeeded or Failed"
    Sep 17 01:22:21.562: INFO: Pod "pod-secrets-12d71bae-fdf2-4993-910d-c6fa4f5647d5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.724194ms
    Sep 17 01:22:23.566: INFO: Pod "pod-secrets-12d71bae-fdf2-4993-910d-c6fa4f5647d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007828072s
    STEP: Saw pod success
    Sep 17 01:22:23.566: INFO: Pod "pod-secrets-12d71bae-fdf2-4993-910d-c6fa4f5647d5" satisfied condition "Succeeded or Failed"
    Sep 17 01:22:23.569: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-secrets-12d71bae-fdf2-4993-910d-c6fa4f5647d5 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:22:23.588: INFO: Waiting for pod pod-secrets-12d71bae-fdf2-4993-910d-c6fa4f5647d5 to disappear
    Sep 17 01:22:23.591: INFO: Pod pod-secrets-12d71bae-fdf2-4993-910d-c6fa4f5647d5 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:22:23.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-2615" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":69,"skipped":1202,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:22:23.621: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating secret with name secret-test-d3fa0af2-86d7-47a5-8e0e-8c1ba238d813
    STEP: Creating a pod to test consume secrets
    Sep 17 01:22:23.698: INFO: Waiting up to 5m0s for pod "pod-secrets-e019ffc8-b840-4258-95e5-1d47a1ddb8e0" in namespace "secrets-9074" to be "Succeeded or Failed"
    Sep 17 01:22:23.702: INFO: Pod "pod-secrets-e019ffc8-b840-4258-95e5-1d47a1ddb8e0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.989178ms
    Sep 17 01:22:25.707: INFO: Pod "pod-secrets-e019ffc8-b840-4258-95e5-1d47a1ddb8e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008702141s
    STEP: Saw pod success
    Sep 17 01:22:25.707: INFO: Pod "pod-secrets-e019ffc8-b840-4258-95e5-1d47a1ddb8e0" satisfied condition "Succeeded or Failed"
    Sep 17 01:22:25.710: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-secrets-e019ffc8-b840-4258-95e5-1d47a1ddb8e0 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:22:25.727: INFO: Waiting for pod pod-secrets-e019ffc8-b840-4258-95e5-1d47a1ddb8e0 to disappear
    Sep 17 01:22:25.731: INFO: Pod pod-secrets-e019ffc8-b840-4258-95e5-1d47a1ddb8e0 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:22:25.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-9074" for this suite.
    STEP: Destroying namespace "secret-namespace-5995" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":70,"skipped":1207,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:22:32.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-8130" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":71,"skipped":1234,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating pod pod-subpath-test-configmap-859f
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 17 01:22:32.961: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-859f" in namespace "subpath-2293" to be "Succeeded or Failed"
    Sep 17 01:22:32.964: INFO: Pod "pod-subpath-test-configmap-859f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.466736ms
    Sep 17 01:22:34.968: INFO: Pod "pod-subpath-test-configmap-859f": Phase="Running", Reason="", readiness=true. Elapsed: 2.006737248s
    Sep 17 01:22:36.972: INFO: Pod "pod-subpath-test-configmap-859f": Phase="Running", Reason="", readiness=true. Elapsed: 4.010858625s
    Sep 17 01:22:38.978: INFO: Pod "pod-subpath-test-configmap-859f": Phase="Running", Reason="", readiness=true. Elapsed: 6.016052976s
    Sep 17 01:22:40.981: INFO: Pod "pod-subpath-test-configmap-859f": Phase="Running", Reason="", readiness=true. Elapsed: 8.019942696s
    Sep 17 01:22:42.986: INFO: Pod "pod-subpath-test-configmap-859f": Phase="Running", Reason="", readiness=true. Elapsed: 10.024245755s
    Sep 17 01:22:44.990: INFO: Pod "pod-subpath-test-configmap-859f": Phase="Running", Reason="", readiness=true. Elapsed: 12.02832623s
    Sep 17 01:22:46.994: INFO: Pod "pod-subpath-test-configmap-859f": Phase="Running", Reason="", readiness=true. Elapsed: 14.032485448s
    Sep 17 01:22:49.000: INFO: Pod "pod-subpath-test-configmap-859f": Phase="Running", Reason="", readiness=true. Elapsed: 16.038457675s
    Sep 17 01:22:51.004: INFO: Pod "pod-subpath-test-configmap-859f": Phase="Running", Reason="", readiness=true. Elapsed: 18.042636031s
    Sep 17 01:22:53.007: INFO: Pod "pod-subpath-test-configmap-859f": Phase="Running", Reason="", readiness=true. Elapsed: 20.045819302s
    Sep 17 01:22:55.011: INFO: Pod "pod-subpath-test-configmap-859f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.04983131s
    STEP: Saw pod success
    Sep 17 01:22:55.011: INFO: Pod "pod-subpath-test-configmap-859f" satisfied condition "Succeeded or Failed"
    Sep 17 01:22:55.014: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc pod pod-subpath-test-configmap-859f container test-container-subpath-configmap-859f: <nil>
    STEP: delete the pod
    Sep 17 01:22:55.035: INFO: Waiting for pod pod-subpath-test-configmap-859f to disappear
    Sep 17 01:22:55.038: INFO: Pod pod-subpath-test-configmap-859f no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-859f
    Sep 17 01:22:55.038: INFO: Deleting pod "pod-subpath-test-configmap-859f" in namespace "subpath-2293"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:22:55.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-2293" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":72,"skipped":1269,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [k8s.io] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 9 lines ...
    STEP: verifying the pod is in kubernetes
    STEP: updating the pod
    Sep 17 01:22:57.618: INFO: Successfully updated pod "pod-update-activedeadlineseconds-0a8b2e13-513d-4d8e-b93f-666c3d5f0ed3"
    Sep 17 01:22:57.618: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-0a8b2e13-513d-4d8e-b93f-666c3d5f0ed3" in namespace "pods-6812" to be "terminated due to deadline exceeded"
    Sep 17 01:22:57.621: INFO: Pod "pod-update-activedeadlineseconds-0a8b2e13-513d-4d8e-b93f-666c3d5f0ed3": Phase="Running", Reason="", readiness=true. Elapsed: 3.106848ms
    Sep 17 01:22:59.626: INFO: Pod "pod-update-activedeadlineseconds-0a8b2e13-513d-4d8e-b93f-666c3d5f0ed3": Phase="Running", Reason="", readiness=true. Elapsed: 2.007875752s
    Sep 17 01:23:01.630: INFO: Pod "pod-update-activedeadlineseconds-0a8b2e13-513d-4d8e-b93f-666c3d5f0ed3": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.0121199s
    Sep 17 01:23:01.630: INFO: Pod "pod-update-activedeadlineseconds-0a8b2e13-513d-4d8e-b93f-666c3d5f0ed3" satisfied condition "terminated due to deadline exceeded"
    [AfterEach] [k8s.io] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:23:01.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-6812" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":73,"skipped":1273,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:23:03.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4555" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":74,"skipped":1280,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-scheduling] LimitRange
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:23:10.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "limitrange-7790" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":75,"skipped":1285,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:23:16.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-8526" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":76,"skipped":1349,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:23:30.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2487" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":77,"skipped":1395,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:23:32.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-9009" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":78,"skipped":1421,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:23:32.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-2748" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":-1,"completed":79,"skipped":1434,"failed":0}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:23:32.310: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward api env vars
    Sep 17 01:23:32.345: INFO: Waiting up to 5m0s for pod "downward-api-71394aba-fdc5-4163-bb2f-f725160bced5" in namespace "downward-api-7458" to be "Succeeded or Failed"
    Sep 17 01:23:32.348: INFO: Pod "downward-api-71394aba-fdc5-4163-bb2f-f725160bced5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.8469ms
    Sep 17 01:23:34.353: INFO: Pod "downward-api-71394aba-fdc5-4163-bb2f-f725160bced5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008222129s
    STEP: Saw pod success
    Sep 17 01:23:34.353: INFO: Pod "downward-api-71394aba-fdc5-4163-bb2f-f725160bced5" satisfied condition "Succeeded or Failed"
    Sep 17 01:23:34.357: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod downward-api-71394aba-fdc5-4163-bb2f-f725160bced5 container dapi-container: <nil>
    STEP: delete the pod
    Sep 17 01:23:34.376: INFO: Waiting for pod downward-api-71394aba-fdc5-4163-bb2f-f725160bced5 to disappear
    Sep 17 01:23:34.379: INFO: Pod downward-api-71394aba-fdc5-4163-bb2f-f725160bced5 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:23:34.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7458" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":80,"skipped":1452,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 100 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:23:38.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-2012" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":81,"skipped":1455,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":37,"skipped":921,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:20:49.834: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 37 lines ...
    Sep 17 01:24:57.524: INFO: stdout: "\naffinity-nodeport-transition-stt28\naffinity-nodeport-transition-stt28\naffinity-nodeport-transition-stt28\naffinity-nodeport-transition-stt28\n"
    Sep 17 01:24:57.524: INFO: Received response from host: affinity-nodeport-transition-stt28
    Sep 17 01:24:57.524: INFO: Received response from host: affinity-nodeport-transition-stt28
    Sep 17 01:24:57.524: INFO: Received response from host: affinity-nodeport-transition-stt28
    Sep 17 01:24:57.524: INFO: Received response from host: affinity-nodeport-transition-stt28
    Sep 17 01:24:57.524: INFO: [affinity-nodeport-transition-stt28 affinity-nodeport-transition-6crjn affinity-nodeport-transition-6crjn affinity-nodeport-transition-6crjn affinity-nodeport-transition-stt28 affinity-nodeport-transition-stt28 affinity-nodeport-transition-stt28 affinity-nodeport-transition-stt28]
    Sep 17 01:24:57.524: FAIL: Connection timed out or not enough responses.
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/network.checkAffinity(0x56112e0, 0xc003feeb00, 0xc001636c00, 0xc0043f9680, 0xa, 0x78c3, 0x0, 0xc001636c00)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202 +0x2db
    k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000effce0, 0x56112e0, 0xc003feeb00, 0xc000e6f900, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3454 +0x79b
... skipping 69 lines ...
    Sep 17 01:24:53.437: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30448/\n"
    Sep 17 01:24:53.438: INFO: stdout: "\n"
    Sep 17 01:24:53.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3228 exec execpod-affinityd8gwm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.4:30448/ ; done'
    Sep 17 01:25:43.648: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30448/\n"
    Sep 17 01:25:43.648: INFO: stdout: "\n"
    Sep 17 01:25:43.648: INFO: []
    Sep 17 01:25:43.649: FAIL: Connection timed out or not enough responses.
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/network.checkAffinity(0x56112e0, 0xc002db1340, 0xc0030c6000, 0xc0030808a0, 0xa, 0x76f0, 0x1, 0xc0030c6000)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202 +0x2db
    k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001159340, 0x56112e0, 0xc002db1340, 0xc000f48780, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3447 +0x92c
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    
      Sep 17 01:25:43.649: Connection timed out or not enough responses.
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202
    ------------------------------
    {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":29,"skipped":593,"failed":4,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:25:55.876: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 48 lines ...
    STEP: Destroying namespace "services-1725" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":593,"failed":4,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 64 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
        should perform rolling updates and roll backs of template modifications [Conformance]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":82,"skipped":1466,"failed":0}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:26:19.404: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9851a40c-0183-4e4e-b4ed-9a65ce631f71" in namespace "downward-api-6170" to be "Succeeded or Failed"
    Sep 17 01:26:19.407: INFO: Pod "downwardapi-volume-9851a40c-0183-4e4e-b4ed-9a65ce631f71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.765805ms
    Sep 17 01:26:21.412: INFO: Pod "downwardapi-volume-9851a40c-0183-4e4e-b4ed-9a65ce631f71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007383215s
    STEP: Saw pod success
    Sep 17 01:26:21.412: INFO: Pod "downwardapi-volume-9851a40c-0183-4e4e-b4ed-9a65ce631f71" satisfied condition "Succeeded or Failed"
    Sep 17 01:26:21.415: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod downwardapi-volume-9851a40c-0183-4e4e-b4ed-9a65ce631f71 container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:26:21.444: INFO: Waiting for pod downwardapi-volume-9851a40c-0183-4e4e-b4ed-9a65ce631f71 to disappear
    Sep 17 01:26:21.449: INFO: Pod downwardapi-volume-9851a40c-0183-4e4e-b4ed-9a65ce631f71 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:26:21.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6170" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":83,"skipped":1472,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:26:21.509: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap with name projected-configmap-test-volume-1eb53eb0-6aac-4390-9e66-412a809db6a0
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:26:21.549: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7fe8b373-f0a7-4afa-a946-2dd93acef1ab" in namespace "projected-3215" to be "Succeeded or Failed"
    Sep 17 01:26:21.554: INFO: Pod "pod-projected-configmaps-7fe8b373-f0a7-4afa-a946-2dd93acef1ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.480449ms
    Sep 17 01:26:23.557: INFO: Pod "pod-projected-configmaps-7fe8b373-f0a7-4afa-a946-2dd93acef1ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008220589s
    STEP: Saw pod success
    Sep 17 01:26:23.557: INFO: Pod "pod-projected-configmaps-7fe8b373-f0a7-4afa-a946-2dd93acef1ab" satisfied condition "Succeeded or Failed"
    Sep 17 01:26:23.560: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr pod pod-projected-configmaps-7fe8b373-f0a7-4afa-a946-2dd93acef1ab container projected-configmap-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:26:23.587: INFO: Waiting for pod pod-projected-configmaps-7fe8b373-f0a7-4afa-a946-2dd93acef1ab to disappear
    Sep 17 01:26:23.590: INFO: Pod pod-projected-configmaps-7fe8b373-f0a7-4afa-a946-2dd93acef1ab no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:26:23.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3215" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":84,"skipped":1513,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:26:23.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2374" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":85,"skipped":1514,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 55 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:26:26.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-1021" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":31,"skipped":595,"failed":4,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:26:26.635: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a886fe44-b32d-4f1a-8e95-d83846d4353d" in namespace "downward-api-9023" to be "Succeeded or Failed"
    Sep 17 01:26:26.639: INFO: Pod "downwardapi-volume-a886fe44-b32d-4f1a-8e95-d83846d4353d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.425978ms
    Sep 17 01:26:28.643: INFO: Pod "downwardapi-volume-a886fe44-b32d-4f1a-8e95-d83846d4353d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00824245s
    STEP: Saw pod success
    Sep 17 01:26:28.643: INFO: Pod "downwardapi-volume-a886fe44-b32d-4f1a-8e95-d83846d4353d" satisfied condition "Succeeded or Failed"
    Sep 17 01:26:28.647: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-wkpgc pod downwardapi-volume-a886fe44-b32d-4f1a-8e95-d83846d4353d container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:26:28.675: INFO: Waiting for pod downwardapi-volume-a886fe44-b32d-4f1a-8e95-d83846d4353d to disappear
    Sep 17 01:26:28.678: INFO: Pod downwardapi-volume-a886fe44-b32d-4f1a-8e95-d83846d4353d no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:26:28.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-9023" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":636,"failed":4,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
    [BeforeEach] [k8s.io] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:26:28.689: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [k8s.io] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
    [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: creating the pod
    Sep 17 01:26:28.726: INFO: PodSpec: initContainers in spec.initContainers
    [AfterEach] [k8s.io] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:26:31.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-8076" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":33,"skipped":636,"failed":4,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-2930" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":86,"skipped":1515,"failed":0}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:26:45.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-9410" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":87,"skipped":1527,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:26:45.405: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e6094de-1164-4fbd-92ae-e9041c4ef551" in namespace "downward-api-3251" to be "Succeeded or Failed"
    Sep 17 01:26:45.411: INFO: Pod "downwardapi-volume-5e6094de-1164-4fbd-92ae-e9041c4ef551": Phase="Pending", Reason="", readiness=false. Elapsed: 5.120554ms
    Sep 17 01:26:47.415: INFO: Pod "downwardapi-volume-5e6094de-1164-4fbd-92ae-e9041c4ef551": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009162088s
    STEP: Saw pod success
    Sep 17 01:26:47.415: INFO: Pod "downwardapi-volume-5e6094de-1164-4fbd-92ae-e9041c4ef551" satisfied condition "Succeeded or Failed"
    Sep 17 01:26:47.419: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod downwardapi-volume-5e6094de-1164-4fbd-92ae-e9041c4ef551 container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:26:47.435: INFO: Waiting for pod downwardapi-volume-5e6094de-1164-4fbd-92ae-e9041c4ef551 to disappear
    Sep 17 01:26:47.438: INFO: Pod downwardapi-volume-5e6094de-1164-4fbd-92ae-e9041c4ef551 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:26:47.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-3251" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":88,"skipped":1554,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:27:07.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-5888" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":89,"skipped":1589,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] [sig-node] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:27:13.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-3512" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":90,"skipped":1621,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 264 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  31s   default-scheduler  Successfully assigned pod-network-test-5446/netserver-3 to k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr
      Normal  Pulled     31s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine
      Normal  Created    31s   kubelet            Created container webserver
      Normal  Started    31s   kubelet            Started container webserver
    
    Sep 17 01:16:17.375: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.2.42:9080/dial?request=hostname&protocol=http&host=192.168.0.14&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-0:{}])
    Sep 17 01:16:17.375: INFO: ...failed...will try again in next pass
    Sep 17 01:16:17.375: INFO: Breadth first check of 192.168.1.36 on host 172.18.0.5...
    Sep 17 01:16:17.378: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.42:9080/dial?request=hostname&protocol=http&host=192.168.1.36&port=8080&tries=1'] Namespace:pod-network-test-5446 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 17 01:16:17.378: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 17 01:16:22.484: INFO: Waiting for responses: map[netserver-1:{}]
    Sep 17 01:16:24.484: INFO: 
    Output of kubectl describe pod pod-network-test-5446/netserver-0:
... skipping 232 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  39s   default-scheduler  Successfully assigned pod-network-test-5446/netserver-3 to k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr
      Normal  Pulled     39s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine
      Normal  Created    39s   kubelet            Created container webserver
      Normal  Started    39s   kubelet            Started container webserver
    
    Sep 17 01:16:25.023: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.2.42:9080/dial?request=hostname&protocol=http&host=192.168.1.36&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-1:{}])
    Sep 17 01:16:25.023: INFO: ...failed...will try again in next pass
    Sep 17 01:16:25.023: INFO: Breadth first check of 192.168.2.36 on host 172.18.0.7...
    Sep 17 01:16:25.027: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.42:9080/dial?request=hostname&protocol=http&host=192.168.2.36&port=8080&tries=1'] Namespace:pod-network-test-5446 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 17 01:16:25.027: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 17 01:16:25.116: INFO: Waiting for responses: map[]
    Sep 17 01:16:25.116: INFO: reached 192.168.2.36 after 0/1 tries
    Sep 17 01:16:25.116: INFO: Breadth first check of 192.168.6.53 on host 172.18.0.6...
... skipping 379 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  6m6s  default-scheduler  Successfully assigned pod-network-test-5446/netserver-3 to k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr
      Normal  Pulled     6m6s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine
      Normal  Created    6m6s  kubelet            Created container webserver
      Normal  Started    6m6s  kubelet            Started container webserver
    
    Sep 17 01:21:52.173: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.42:9080/dial?request=hostname&protocol=http&host=192.168.0.14&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-0:{}])
    Sep 17 01:21:52.173: INFO: ... Done probing pod [[[ 192.168.0.14 ]]]
    Sep 17 01:21:52.173: INFO: succeeded at polling 3 out of 4 connections
... skipping 374 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  11m   default-scheduler  Successfully assigned pod-network-test-5446/netserver-3 to k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr
      Normal  Pulled     11m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine
      Normal  Created    11m   kubelet            Created container webserver
      Normal  Started    11m   kubelet            Started container webserver
    
    Sep 17 01:27:19.067: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.42:9080/dial?request=hostname&protocol=http&host=192.168.1.36&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-1:{}])
    Sep 17 01:27:19.067: INFO: ... Done probing pod [[[ 192.168.1.36 ]]]
    Sep 17 01:27:19.067: INFO: succeeded at polling 2 out of 4 connections
    Sep 17 01:27:19.067: INFO: pod polling failure summary:
    Sep 17 01:27:19.067: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.42:9080/dial?request=hostname&protocol=http&host=192.168.0.14&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-0:{}]
    Sep 17 01:27:19.067: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.42:9080/dial?request=hostname&protocol=http&host=192.168.1.36&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-1:{}]
    Sep 17 01:27:19.067: FAIL: failed,  2 out of 4 connections failed
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common.glob..func16.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0036b6180)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    
        Sep 17 01:27:19.067: failed,  2 out of 4 connections failed
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:82
    ------------------------------
    {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":37,"skipped":921,"failed":2,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:25:10.149: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 40 lines ...
    Sep 17 01:29:17.706: INFO: stdout: "\naffinity-nodeport-transition-l2gv8\naffinity-nodeport-transition-l2gv8\naffinity-nodeport-transition-l2gv8\naffinity-nodeport-transition-l2gv8\n"
    Sep 17 01:29:17.706: INFO: Received response from host: affinity-nodeport-transition-l2gv8
    Sep 17 01:29:17.706: INFO: Received response from host: affinity-nodeport-transition-l2gv8
    Sep 17 01:29:17.706: INFO: Received response from host: affinity-nodeport-transition-l2gv8
    Sep 17 01:29:17.706: INFO: Received response from host: affinity-nodeport-transition-l2gv8
    Sep 17 01:29:17.706: INFO: [affinity-nodeport-transition-zb6x8 affinity-nodeport-transition-l2gv8 affinity-nodeport-transition-l2gv8 affinity-nodeport-transition-l2gv8 affinity-nodeport-transition-zb6x8 affinity-nodeport-transition-l2gv8 affinity-nodeport-transition-l2gv8 affinity-nodeport-transition-l2gv8 affinity-nodeport-transition-l2gv8 affinity-nodeport-transition-l2gv8 affinity-nodeport-transition-l2gv8]
    Sep 17 01:29:17.706: FAIL: Connection timed out or not enough responses.
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/network.checkAffinity(0x56112e0, 0xc001f05ce0, 0xc00175c400, 0xc00094f5e0, 0xa, 0x7a91, 0x0, 0xc00175c400)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202 +0x2db
    k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000effce0, 0x56112e0, 0xc001f05ce0, 0xc000c84780, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3454 +0x79b
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    
      Sep 17 01:29:17.706: Connection timed out or not enough responses.
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202
    ------------------------------
    {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":37,"skipped":921,"failed":3,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:29:30.145: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating secret with name secret-test-7cfa1cbb-6bab-4fb2-949a-eaf50a09b3c7
    STEP: Creating a pod to test consume secrets
    Sep 17 01:29:30.201: INFO: Waiting up to 5m0s for pod "pod-secrets-0406f34d-d9e4-4c6d-ac7b-f8e53a2723a6" in namespace "secrets-6369" to be "Succeeded or Failed"
    Sep 17 01:29:30.204: INFO: Pod "pod-secrets-0406f34d-d9e4-4c6d-ac7b-f8e53a2723a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.987872ms
    Sep 17 01:29:32.208: INFO: Pod "pod-secrets-0406f34d-d9e4-4c6d-ac7b-f8e53a2723a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007231711s
    STEP: Saw pod success
    Sep 17 01:29:32.208: INFO: Pod "pod-secrets-0406f34d-d9e4-4c6d-ac7b-f8e53a2723a6" satisfied condition "Succeeded or Failed"
    Sep 17 01:29:32.211: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod pod-secrets-0406f34d-d9e4-4c6d-ac7b-f8e53a2723a6 container secret-env-test: <nil>
    STEP: delete the pod
    Sep 17 01:29:32.236: INFO: Waiting for pod pod-secrets-0406f34d-d9e4-4c6d-ac7b-f8e53a2723a6 to disappear
    Sep 17 01:29:32.241: INFO: Pod pod-secrets-0406f34d-d9e4-4c6d-ac7b-f8e53a2723a6 no longer exists
    [AfterEach] [sig-api-machinery] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:29:32.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-6369" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":926,"failed":3,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.568 seconds]
    [k8s.io] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
      should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    ------------------------------
    {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":91,"skipped":1643,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:31:44.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-3211" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":92,"skipped":1669,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [k8s.io] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:31:44.439: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [k8s.io] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
    [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    Sep 17 01:31:44.472: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-f2a5b315-f1c2-4789-b2e7-1fe132024420" in namespace "security-context-test-9434" to be "Succeeded or Failed"
    Sep 17 01:31:44.476: INFO: Pod "alpine-nnp-false-f2a5b315-f1c2-4789-b2e7-1fe132024420": Phase="Pending", Reason="", readiness=false. Elapsed: 3.298442ms
    Sep 17 01:31:46.480: INFO: Pod "alpine-nnp-false-f2a5b315-f1c2-4789-b2e7-1fe132024420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007389367s
    Sep 17 01:31:46.480: INFO: Pod "alpine-nnp-false-f2a5b315-f1c2-4789-b2e7-1fe132024420" satisfied condition "Succeeded or Failed"
    [AfterEach] [k8s.io] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:31:46.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-9434" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":93,"skipped":1672,"failed":0}
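    The Security Context test above creates a pod whose container sets `allowPrivilegeEscalation: false` and verifies the process cannot gain privileges. A minimal sketch of the relevant part of such a pod manifest as a Python dict (field names follow the Kubernetes Pod API; pod name, image, and command are illustrative, not the test's exact values):

    ```python
    pod_manifest = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "alpine-nnp-false"},  # name illustrative
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "alpine-nnp-false",
                "image": "alpine",            # illustrative image
                "command": ["sh", "-c", "id -u"],  # illustrative command
                "securityContext": {
                    # The field this conformance test exercises: when False,
                    # the container runtime sets the kernel's no_new_privs
                    # flag, so setuid binaries cannot elevate privileges.
                    "allowPrivilegeEscalation": False,
                },
            }],
        },
    }
    ```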

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 5 lines ...
    STEP: create the rc
    STEP: delete the rc
    STEP: wait for the rc to be deleted
    STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
    STEP: Gathering metrics
    W0917 01:27:12.027103      20 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
    Sep 17 01:32:12.031: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
    Sep 17 01:32:12.031: INFO: Deleting pod "simpletest.rc-2s245" in namespace "gc-1908"
    Sep 17 01:32:12.043: INFO: Deleting pod "simpletest.rc-9xtvk" in namespace "gc-1908"
    Sep 17 01:32:12.056: INFO: Deleting pod "simpletest.rc-g458b" in namespace "gc-1908"
    Sep 17 01:32:12.070: INFO: Deleting pod "simpletest.rc-ldzwp" in namespace "gc-1908"
    Sep 17 01:32:12.083: INFO: Deleting pod "simpletest.rc-lnctx" in namespace "gc-1908"
    Sep 17 01:32:12.091: INFO: Deleting pod "simpletest.rc-pfk8p" in namespace "gc-1908"
... skipping 10 lines ...
    • [SLOW TEST:340.280 seconds]
    [sig-api-machinery] Garbage collector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should orphan pods created by rc if delete options say so [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":34,"skipped":641,"failed":4,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:32:26.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-8300" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":94,"skipped":1675,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:32:28.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-1905" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":643,"failed":4,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:32:26.680: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46fe7f55-4c9c-4164-b7e0-f585ce90b592" in namespace "downward-api-1759" to be "Succeeded or Failed"
    Sep 17 01:32:26.682: INFO: Pod "downwardapi-volume-46fe7f55-4c9c-4164-b7e0-f585ce90b592": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348398ms
    Sep 17 01:32:28.687: INFO: Pod "downwardapi-volume-46fe7f55-4c9c-4164-b7e0-f585ce90b592": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007077687s
    STEP: Saw pod success
    Sep 17 01:32:28.687: INFO: Pod "downwardapi-volume-46fe7f55-4c9c-4164-b7e0-f585ce90b592" satisfied condition "Succeeded or Failed"
    Sep 17 01:32:28.690: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod downwardapi-volume-46fe7f55-4c9c-4164-b7e0-f585ce90b592 container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:32:28.713: INFO: Waiting for pod downwardapi-volume-46fe7f55-4c9c-4164-b7e0-f585ce90b592 to disappear
    Sep 17 01:32:28.716: INFO: Pod downwardapi-volume-46fe7f55-4c9c-4164-b7e0-f585ce90b592 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:32:28.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-1759" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":95,"skipped":1685,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:32:28.417: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-runtime
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: create the container
    STEP: wait for the container to reach Failed
    STEP: get the container status
    STEP: the container should be terminated
    STEP: the termination message should be set
    Sep 17 01:32:30.469: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
    STEP: delete the container
    [AfterEach] [k8s.io] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:32:30.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-7412" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":658,"failed":4,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-9727-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":96,"skipped":1712,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:32:43.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-9947" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":97,"skipped":1740,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:32:43.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-8470" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":-1,"completed":98,"skipped":1749,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 7 lines ...
    STEP: Deploying the webhook pod
    STEP: Wait for the deployment to be ready
    Sep 17 01:32:44.546: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep 17 01:32:47.568: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should unconditionally reject operations on fail closed webhook [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
    STEP: create a namespace for the webhook
    STEP: create a configmap should be unconditionally rejected by the webhook
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:32:47.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-8601" for this suite.
    STEP: Destroying namespace "webhook-8601-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":99,"skipped":1756,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:32:47.716: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating projection with secret that has name projected-secret-test-b711f3b2-f14f-4aac-a071-e8a319c434e1
    STEP: Creating a pod to test consume secrets
    Sep 17 01:32:47.763: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f61dc4fb-fa12-4e7b-81bd-f0eeb5cf9014" in namespace "projected-3898" to be "Succeeded or Failed"
    Sep 17 01:32:47.766: INFO: Pod "pod-projected-secrets-f61dc4fb-fa12-4e7b-81bd-f0eeb5cf9014": Phase="Pending", Reason="", readiness=false. Elapsed: 3.090645ms
    Sep 17 01:32:49.770: INFO: Pod "pod-projected-secrets-f61dc4fb-fa12-4e7b-81bd-f0eeb5cf9014": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006990791s
    STEP: Saw pod success
    Sep 17 01:32:49.770: INFO: Pod "pod-projected-secrets-f61dc4fb-fa12-4e7b-81bd-f0eeb5cf9014" satisfied condition "Succeeded or Failed"
    Sep 17 01:32:49.773: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-projected-secrets-f61dc4fb-fa12-4e7b-81bd-f0eeb5cf9014 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:32:49.789: INFO: Waiting for pod pod-projected-secrets-f61dc4fb-fa12-4e7b-81bd-f0eeb5cf9014 to disappear
    Sep 17 01:32:49.791: INFO: Pod pod-projected-secrets-f61dc4fb-fa12-4e7b-81bd-f0eeb5cf9014 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:32:49.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3898" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":100,"skipped":1772,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:32:51.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-9479" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":101,"skipped":1782,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:33:08.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-504" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":102,"skipped":1784,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:33:11.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-1238" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":103,"skipped":1827,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:33:11.442: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide host IP as an env var [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward api env vars
    Sep 17 01:33:11.475: INFO: Waiting up to 5m0s for pod "downward-api-2114ed5f-1acc-4af6-ad67-5dbbbbe5198e" in namespace "downward-api-4952" to be "Succeeded or Failed"
    Sep 17 01:33:11.478: INFO: Pod "downward-api-2114ed5f-1acc-4af6-ad67-5dbbbbe5198e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.543063ms
    Sep 17 01:33:13.482: INFO: Pod "downward-api-2114ed5f-1acc-4af6-ad67-5dbbbbe5198e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006684671s
    STEP: Saw pod success
    Sep 17 01:33:13.482: INFO: Pod "downward-api-2114ed5f-1acc-4af6-ad67-5dbbbbe5198e" satisfied condition "Succeeded or Failed"
    Sep 17 01:33:13.484: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr pod downward-api-2114ed5f-1acc-4af6-ad67-5dbbbbe5198e container dapi-container: <nil>
    STEP: delete the pod
    Sep 17 01:33:13.510: INFO: Waiting for pod downward-api-2114ed5f-1acc-4af6-ad67-5dbbbbe5198e to disappear
    Sep 17 01:33:13.512: INFO: Pod downward-api-2114ed5f-1acc-4af6-ad67-5dbbbbe5198e no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:33:13.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-4952" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":104,"skipped":1828,"failed":0}
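    The Downward API test above injects the node's IP into the container as an environment variable via a `fieldRef`. A sketch of the env entry such a pod spec uses (the variable name here is illustrative; `status.hostIP` is the documented Downward API field path):

    ```python
    # One entry of a container's "env" list in a Pod spec.
    downward_api_env = {
        "name": "HOST_IP",  # illustrative variable name
        "valueFrom": {
            # fieldRef resolves a field of the pod object at runtime;
            # status.hostIP is the IP of the node the pod is scheduled on.
            "fieldRef": {"fieldPath": "status.hostIP"},
        },
    }
    ```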

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:33:18.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-4570" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":105,"skipped":1832,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:33:25.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-480" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":106,"skipped":1909,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:33:36.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-4965" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":107,"skipped":1941,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:33:41.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-2486" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":108,"skipped":1982,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:33:49.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-6564" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":109,"skipped":1988,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:33:56.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-2370" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":110,"skipped":2011,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 26 lines ...
    STEP: Destroying namespace "webhook-2339-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":111,"skipped":2050,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with downward pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating pod pod-subpath-test-downwardapi-mfsw
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 17 01:33:59.804: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mfsw" in namespace "subpath-6189" to be "Succeeded or Failed"
    Sep 17 01:33:59.808: INFO: Pod "pod-subpath-test-downwardapi-mfsw": Phase="Pending", Reason="", readiness=false. Elapsed: 3.793416ms
    Sep 17 01:34:01.812: INFO: Pod "pod-subpath-test-downwardapi-mfsw": Phase="Running", Reason="", readiness=true. Elapsed: 2.007828656s
    Sep 17 01:34:03.816: INFO: Pod "pod-subpath-test-downwardapi-mfsw": Phase="Running", Reason="", readiness=true. Elapsed: 4.011966809s
    Sep 17 01:34:05.821: INFO: Pod "pod-subpath-test-downwardapi-mfsw": Phase="Running", Reason="", readiness=true. Elapsed: 6.016102436s
    Sep 17 01:34:07.826: INFO: Pod "pod-subpath-test-downwardapi-mfsw": Phase="Running", Reason="", readiness=true. Elapsed: 8.021582036s
    Sep 17 01:34:09.830: INFO: Pod "pod-subpath-test-downwardapi-mfsw": Phase="Running", Reason="", readiness=true. Elapsed: 10.025421566s
    Sep 17 01:34:11.834: INFO: Pod "pod-subpath-test-downwardapi-mfsw": Phase="Running", Reason="", readiness=true. Elapsed: 12.029641459s
    Sep 17 01:34:13.838: INFO: Pod "pod-subpath-test-downwardapi-mfsw": Phase="Running", Reason="", readiness=true. Elapsed: 14.033958332s
    Sep 17 01:34:15.844: INFO: Pod "pod-subpath-test-downwardapi-mfsw": Phase="Running", Reason="", readiness=true. Elapsed: 16.039192053s
    Sep 17 01:34:17.848: INFO: Pod "pod-subpath-test-downwardapi-mfsw": Phase="Running", Reason="", readiness=true. Elapsed: 18.043120304s
    Sep 17 01:34:19.851: INFO: Pod "pod-subpath-test-downwardapi-mfsw": Phase="Running", Reason="", readiness=true. Elapsed: 20.046867323s
    Sep 17 01:34:21.856: INFO: Pod "pod-subpath-test-downwardapi-mfsw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.051059625s
    STEP: Saw pod success
    Sep 17 01:34:21.856: INFO: Pod "pod-subpath-test-downwardapi-mfsw" satisfied condition "Succeeded or Failed"
    Sep 17 01:34:21.858: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-subpath-test-downwardapi-mfsw container test-container-subpath-downwardapi-mfsw: <nil>
    STEP: delete the pod
    Sep 17 01:34:21.874: INFO: Waiting for pod pod-subpath-test-downwardapi-mfsw to disappear
    Sep 17 01:34:21.880: INFO: Pod pod-subpath-test-downwardapi-mfsw no longer exists
    STEP: Deleting pod pod-subpath-test-downwardapi-mfsw
    Sep 17 01:34:21.880: INFO: Deleting pod "pod-subpath-test-downwardapi-mfsw" in namespace "subpath-6189"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:34:21.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-6189" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":112,"skipped":2075,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:34:21.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-5398" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":113,"skipped":2077,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:34:21.960: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep 17 01:34:21.999: INFO: Waiting up to 5m0s for pod "pod-d328f01f-47bc-4274-9b71-7b033e827365" in namespace "emptydir-9646" to be "Succeeded or Failed"
    Sep 17 01:34:22.003: INFO: Pod "pod-d328f01f-47bc-4274-9b71-7b033e827365": Phase="Pending", Reason="", readiness=false. Elapsed: 3.575589ms
    Sep 17 01:34:24.008: INFO: Pod "pod-d328f01f-47bc-4274-9b71-7b033e827365": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008392851s
    STEP: Saw pod success
    Sep 17 01:34:24.008: INFO: Pod "pod-d328f01f-47bc-4274-9b71-7b033e827365" satisfied condition "Succeeded or Failed"
    Sep 17 01:34:24.012: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-d328f01f-47bc-4274-9b71-7b033e827365 container test-container: <nil>
    STEP: delete the pod
    Sep 17 01:34:24.029: INFO: Waiting for pod pod-d328f01f-47bc-4274-9b71-7b033e827365 to disappear
    Sep 17 01:34:24.032: INFO: Pod pod-d328f01f-47bc-4274-9b71-7b033e827365 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:34:24.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-9646" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":114,"skipped":2078,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:34:27.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-4888" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":115,"skipped":2103,"failed":0}

    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:34:27.714: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap with name projected-configmap-test-volume-map-9765affd-f4bc-40a0-9385-33f7086f4888
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:34:27.757: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b29ca0df-43c0-41da-9e77-e077515a44c1" in namespace "projected-4160" to be "Succeeded or Failed"
    Sep 17 01:34:27.761: INFO: Pod "pod-projected-configmaps-b29ca0df-43c0-41da-9e77-e077515a44c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.829798ms
    Sep 17 01:34:29.765: INFO: Pod "pod-projected-configmaps-b29ca0df-43c0-41da-9e77-e077515a44c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006856942s
    STEP: Saw pod success
    Sep 17 01:34:29.765: INFO: Pod "pod-projected-configmaps-b29ca0df-43c0-41da-9e77-e077515a44c1" satisfied condition "Succeeded or Failed"
    Sep 17 01:34:29.767: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-projected-configmaps-b29ca0df-43c0-41da-9e77-e077515a44c1 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:34:29.785: INFO: Waiting for pod pod-projected-configmaps-b29ca0df-43c0-41da-9e77-e077515a44c1 to disappear
    Sep 17 01:34:29.788: INFO: Pod pod-projected-configmaps-b29ca0df-43c0-41da-9e77-e077515a44c1 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:34:29.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4160" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":116,"skipped":2103,"failed":0}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:34:30.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-9250" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":117,"skipped":2121,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 6 lines ...
    STEP: Wait for the Deployment to create new ReplicaSet
    STEP: delete the deployment
    STEP: wait for all rs to be garbage collected
    STEP: expected 0 pods, got 2 pods
    STEP: Gathering metrics
    W0917 01:29:33.346159      16 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
    Sep 17 01:34:33.350: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
    [AfterEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:34:33.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-5463" for this suite.
    
    
    • [SLOW TEST:301.094 seconds]
    [sig-api-machinery] Garbage collector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should delete RS created by deployment when not orphaning [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":39,"skipped":935,"failed":3,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:34:36.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-8890" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":118,"skipped":2134,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:34:39.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1615" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":958,"failed":3,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:34:36.764: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [k8s.io] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
    [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: creating the pod
    Sep 17 01:34:36.791: INFO: PodSpec: initContainers in spec.initContainers
    Sep 17 01:35:21.886: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-67c94022-523d-42c6-9957-989a6973a231", GenerateName:"", Namespace:"init-container-9695", SelfLink:"", UID:"bb4e34a8-d7ae-4c36-99f2-abd1c669022a", ResourceVersion:"19662", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63798975276, loc:(*time.Location)(0x798e100)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"791510135"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003477f80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003477fa0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003477fc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003d84000)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tkf8q", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001eef640), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tkf8q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tkf8q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tkf8q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0043005a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr", HostNetwork:false, HostPID:false, 
HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0025695e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004300740)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004300790)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004300798), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00430079c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc000c06b90), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798975276, loc:(*time.Location)(0x798e100)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798975276, loc:(*time.Location)(0x798e100)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798975276, loc:(*time.Location)(0x798e100)}}, Reason:"ContainersNotReady", Message:"containers with 
unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798975276, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.6", PodIP:"192.168.6.159", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.6.159"}}, StartTime:(*v1.Time)(0xc003d84020), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025697a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002569810)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://7e23ef64218ae60599ef5a5307af2bdf1b5bb0c7232a7f3a2bcea6717873f1ca", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003d84060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003d84040), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc004300a1f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
    [AfterEach] [k8s.io] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:35:21.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-9695" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":119,"skipped":2136,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:35:21.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-5574" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":120,"skipped":2141,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    STEP: Destroying namespace "services-6547" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":121,"skipped":2165,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:35:22.106: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep 17 01:35:22.139: INFO: Waiting up to 5m0s for pod "pod-e9ea59f4-ccbd-488e-9a1a-dacda7806cf2" in namespace "emptydir-120" to be "Succeeded or Failed"
    Sep 17 01:35:22.142: INFO: Pod "pod-e9ea59f4-ccbd-488e-9a1a-dacda7806cf2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.280635ms
    Sep 17 01:35:24.147: INFO: Pod "pod-e9ea59f4-ccbd-488e-9a1a-dacda7806cf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007609291s
    STEP: Saw pod success
    Sep 17 01:35:24.147: INFO: Pod "pod-e9ea59f4-ccbd-488e-9a1a-dacda7806cf2" satisfied condition "Succeeded or Failed"
    Sep 17 01:35:24.150: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-e9ea59f4-ccbd-488e-9a1a-dacda7806cf2 container test-container: <nil>
    STEP: delete the pod
    Sep 17 01:35:24.165: INFO: Waiting for pod pod-e9ea59f4-ccbd-488e-9a1a-dacda7806cf2 to disappear
    Sep 17 01:35:24.168: INFO: Pod pod-e9ea59f4-ccbd-488e-9a1a-dacda7806cf2 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:35:24.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-120" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":122,"skipped":2175,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [k8s.io] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:35:24.180: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [k8s.io] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
    [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    Sep 17 01:35:24.216: INFO: Waiting up to 5m0s for pod "busybox-user-65534-e5e57cbb-c8cc-45fc-910a-6620de2b5121" in namespace "security-context-test-3416" to be "Succeeded or Failed"
    Sep 17 01:35:24.220: INFO: Pod "busybox-user-65534-e5e57cbb-c8cc-45fc-910a-6620de2b5121": Phase="Pending", Reason="", readiness=false. Elapsed: 3.444631ms
    Sep 17 01:35:26.224: INFO: Pod "busybox-user-65534-e5e57cbb-c8cc-45fc-910a-6620de2b5121": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007328102s
    Sep 17 01:35:26.224: INFO: Pod "busybox-user-65534-e5e57cbb-c8cc-45fc-910a-6620de2b5121" satisfied condition "Succeeded or Failed"
    [AfterEach] [k8s.io] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:35:26.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-3416" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":123,"skipped":2176,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap with name configmap-projected-all-test-volume-ccf75d83-6087-43e7-81b9-3dc8391ccdea
    STEP: Creating secret with name secret-projected-all-test-volume-0df0805d-ac78-4434-a81f-c65311b80062
    STEP: Creating a pod to test Check all projections for projected volume plugin
    Sep 17 01:35:26.291: INFO: Waiting up to 5m0s for pod "projected-volume-65b28721-9ca4-4455-b832-9355de709aab" in namespace "projected-2371" to be "Succeeded or Failed"
    Sep 17 01:35:26.294: INFO: Pod "projected-volume-65b28721-9ca4-4455-b832-9355de709aab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.92341ms
    Sep 17 01:35:28.298: INFO: Pod "projected-volume-65b28721-9ca4-4455-b832-9355de709aab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007064514s
    STEP: Saw pod success
    Sep 17 01:35:28.298: INFO: Pod "projected-volume-65b28721-9ca4-4455-b832-9355de709aab" satisfied condition "Succeeded or Failed"
    Sep 17 01:35:28.301: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod projected-volume-65b28721-9ca4-4455-b832-9355de709aab container projected-all-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:35:28.317: INFO: Waiting for pod projected-volume-65b28721-9ca4-4455-b832-9355de709aab to disappear
    Sep 17 01:35:28.320: INFO: Pod projected-volume-65b28721-9ca4-4455-b832-9355de709aab no longer exists
    [AfterEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:35:28.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2371" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":124,"skipped":2181,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:35:28.333: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep 17 01:35:28.374: INFO: Waiting up to 5m0s for pod "pod-b054a02e-ba37-48a2-82db-f3e97ce5a7bb" in namespace "emptydir-5276" to be "Succeeded or Failed"
    Sep 17 01:35:28.376: INFO: Pod "pod-b054a02e-ba37-48a2-82db-f3e97ce5a7bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377942ms
    Sep 17 01:35:30.381: INFO: Pod "pod-b054a02e-ba37-48a2-82db-f3e97ce5a7bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006769079s
    STEP: Saw pod success
    Sep 17 01:35:30.381: INFO: Pod "pod-b054a02e-ba37-48a2-82db-f3e97ce5a7bb" satisfied condition "Succeeded or Failed"
    Sep 17 01:35:30.384: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-b054a02e-ba37-48a2-82db-f3e97ce5a7bb container test-container: <nil>
    STEP: delete the pod
    Sep 17 01:35:30.399: INFO: Waiting for pod pod-b054a02e-ba37-48a2-82db-f3e97ce5a7bb to disappear
    Sep 17 01:35:30.403: INFO: Pod pod-b054a02e-ba37-48a2-82db-f3e97ce5a7bb no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:35:30.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-5276" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":125,"skipped":2183,"failed":0}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 41 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:35:50.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-1221" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":126,"skipped":2201,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-4501-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":127,"skipped":2207,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "services-9029" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":128,"skipped":2257,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-5524" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":129,"skipped":2263,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:36:42.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-4796" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":130,"skipped":2360,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:36:43.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-7976" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":131,"skipped":2392,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:36:43.530: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep 17 01:36:43.569: INFO: Waiting up to 5m0s for pod "pod-72d89ecf-8baf-4e3f-9018-d0aa3e4bf58a" in namespace "emptydir-1950" to be "Succeeded or Failed"
    Sep 17 01:36:43.573: INFO: Pod "pod-72d89ecf-8baf-4e3f-9018-d0aa3e4bf58a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.508397ms
    Sep 17 01:36:45.577: INFO: Pod "pod-72d89ecf-8baf-4e3f-9018-d0aa3e4bf58a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007713111s
    STEP: Saw pod success
    Sep 17 01:36:45.577: INFO: Pod "pod-72d89ecf-8baf-4e3f-9018-d0aa3e4bf58a" satisfied condition "Succeeded or Failed"
    Sep 17 01:36:45.580: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-72d89ecf-8baf-4e3f-9018-d0aa3e4bf58a container test-container: <nil>
    STEP: delete the pod
    Sep 17 01:36:45.599: INFO: Waiting for pod pod-72d89ecf-8baf-4e3f-9018-d0aa3e4bf58a to disappear
    Sep 17 01:36:45.603: INFO: Pod pod-72d89ecf-8baf-4e3f-9018-d0aa3e4bf58a no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:36:45.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1950" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":132,"skipped":2426,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:36:45.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-1584" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":133,"skipped":2440,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    Sep 17 01:36:48.269: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
    Sep 17 01:36:48.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-884 describe pod agnhost-primary-sz74l'
    Sep 17 01:36:48.396: INFO: stderr: ""
    Sep 17 01:36:48.396: INFO: stdout: "Name:         agnhost-primary-sz74l\nNamespace:    kubectl-884\nPriority:     0\nNode:         k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr/172.18.0.6\nStart Time:   Sat, 17 Sep 2022 01:36:47 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           192.168.6.173\nIPs:\n  IP:           192.168.6.173\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://3724a6164a36b1b76a96680426efb64af0814a51e3ce004ee64bc0a118c38c6c\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.21\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 17 Sep 2022 01:36:47 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sw7n4 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-sw7n4:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-sw7n4\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  1s    default-scheduler  Successfully assigned kubectl-884/agnhost-primary-sz74l to k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr\n  Normal  Pulled     1s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already 
present on machine\n  Normal  Created    1s    kubelet            Created container agnhost-primary\n  Normal  Started    1s    kubelet            Started container agnhost-primary\n"
    Sep 17 01:36:48.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-884 describe rc agnhost-primary'
    Sep 17 01:36:48.546: INFO: stderr: ""
    Sep 17 01:36:48.546: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-884\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.21\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  1s    replication-controller  Created pod: agnhost-primary-sz74l\n"
    Sep 17 01:36:48.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-884 describe service agnhost-primary'
    Sep 17 01:36:48.705: INFO: stderr: ""
    Sep 17 01:36:48.705: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-884\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Families:       <none>\nIP:                10.132.204.111\nIPs:               10.132.204.111\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.6.173:6379\nSession Affinity:  None\nEvents:            <none>\n"
    Sep 17 01:36:48.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-884 describe node k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt'
    Sep 17 01:36:48.879: INFO: stderr: ""
    Sep 17 01:36:48.879: INFO: stdout: "Name:               k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt\nRoles:              control-plane,master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/control-plane=\n                    node-role.kubernetes.io/master=\nAnnotations:        cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-8gqwip\n                    cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-yh3rl6\n                    cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt\n                    cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n                    cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-8gqwip-d5zcg\n                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 17 Sep 2022 00:57:17 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt\n  AcquireTime:     <unset>\n  RenewTime:       Sat, 17 Sep 2022 01:36:46 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sat, 17 Sep 2022 01:33:08 +0000   Sat, 17 Sep 2022 00:57:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sat, 17 Sep 
2022 01:33:08 +0000   Sat, 17 Sep 2022 00:57:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sat, 17 Sep 2022 01:33:08 +0000   Sat, 17 Sep 2022 00:57:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sat, 17 Sep 2022 01:33:08 +0000   Sat, 17 Sep 2022 00:58:00 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.9\n  Hostname:    k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860676Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 00f9c44ad51e44c486ff062acf4d21e1\n  System UUID:                c5252e30-dda3-4798-9866-c058e94fa7b0\n  Boot ID:                    58babb72-1e89-4be4-b694-651c5f3e2431\n  Kernel Version:             5.4.0-1076-gke\n  OS Image:                   Ubuntu 22.04.1 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.6.7\n  Kubelet Version:            v1.20.15\n  Kube-Proxy Version:         v1.20.15\nPodCIDR:                      192.168.5.0/24\nPodCIDRs:                     192.168.5.0/24\nProviderID:                   docker:////k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt\nNon-terminated Pods:          (6 in total)\n  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---\n  kube-system           
      etcd-k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39m\n  kube-system                 kindnet-7m5cq                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      39m\n  kube-system                 kube-apiserver-k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39m\n  kube-system                 kube-controller-manager-k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39m\n  kube-system                 kube-proxy-vgk2d                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m\n  kube-system                 kube-scheduler-k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                750m (9%)   100m (1%)\n  memory             150Mi (0%)  50Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:\n  Type     Reason                    Age                From        Message\n  ----     ------                    ----               ----        -------\n  Normal   Starting                  39m                kubelet     Starting kubelet.\n  Warning  InvalidDiskCapacity       39m                kubelet     invalid capacity 0 on image filesystem\n  Normal   NodeHasSufficientMemory   39m (x2 over 39m)  kubelet     Node k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     39m (x2 over 39m)  kubelet     Node k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt 
status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      39m (x2 over 39m)  kubelet     Node k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt status is now: NodeHasSufficientPID\n  Warning  CheckLimitsForResolvConf  39m                kubelet     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   NodeAllocatableEnforced   39m                kubelet     Updated Node Allocatable limit across pods\n  Normal   Starting                  39m                kube-proxy  Starting kube-proxy.\n  Normal   NodeReady                 38m                kubelet     Node k8s-upgrade-and-conformance-8gqwip-d5zcg-rtjtt status is now: NodeReady\n  Normal   Starting                  34m                kube-proxy  Starting kube-proxy.\n"
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:36:49.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-884" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":134,"skipped":2464,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [k8s.io] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:36:49.027: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's args [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test substitution in container's args
    Sep 17 01:36:49.063: INFO: Waiting up to 5m0s for pod "var-expansion-461f9376-f84b-4b54-a979-67db9dc90ac6" in namespace "var-expansion-7894" to be "Succeeded or Failed"
    Sep 17 01:36:49.066: INFO: Pod "var-expansion-461f9376-f84b-4b54-a979-67db9dc90ac6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.912726ms
    Sep 17 01:36:51.070: INFO: Pod "var-expansion-461f9376-f84b-4b54-a979-67db9dc90ac6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006683329s
    STEP: Saw pod success
    Sep 17 01:36:51.070: INFO: Pod "var-expansion-461f9376-f84b-4b54-a979-67db9dc90ac6" satisfied condition "Succeeded or Failed"
    Sep 17 01:36:51.073: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod var-expansion-461f9376-f84b-4b54-a979-67db9dc90ac6 container dapi-container: <nil>
    STEP: delete the pod
    Sep 17 01:36:51.104: INFO: Waiting for pod var-expansion-461f9376-f84b-4b54-a979-67db9dc90ac6 to disappear
    Sep 17 01:36:51.107: INFO: Pod var-expansion-461f9376-f84b-4b54-a979-67db9dc90ac6 no longer exists
    [AfterEach] [k8s.io] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:36:51.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-7894" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":135,"skipped":2468,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:36:52.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-4570" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":136,"skipped":2472,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:37:00.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7195" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":137,"skipped":2474,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:37:13.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-3948" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":138,"skipped":2479,"failed":0}

    
    SSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":499,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:27:19.083: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 258 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  25s   default-scheduler  Successfully assigned pod-network-test-90/netserver-3 to k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr
      Normal  Pulled     25s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine
      Normal  Created    25s   kubelet            Created container webserver
      Normal  Started    25s   kubelet            Started container webserver
    
    Sep 17 01:27:44.740: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.2.85:9080/dial?request=hostname&protocol=http&host=192.168.0.49&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-0:{}])
    Sep 17 01:27:44.740: INFO: ...failed...will try again in next pass
    Sep 17 01:27:44.740: INFO: Breadth first check of 192.168.1.80 on host 172.18.0.5...
    Sep 17 01:27:44.744: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.85:9080/dial?request=hostname&protocol=http&host=192.168.1.80&port=8080&tries=1'] Namespace:pod-network-test-90 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 17 01:27:44.744: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 17 01:27:49.826: INFO: Waiting for responses: map[netserver-1:{}]
    Sep 17 01:27:51.826: INFO: 
    Output of kubectl describe pod pod-network-test-90/netserver-0:
... skipping 232 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  33s   default-scheduler  Successfully assigned pod-network-test-90/netserver-3 to k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr
      Normal  Pulled     33s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine
      Normal  Created    33s   kubelet            Created container webserver
      Normal  Started    33s   kubelet            Started container webserver
    
    Sep 17 01:27:52.252: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.2.85:9080/dial?request=hostname&protocol=http&host=192.168.1.80&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-1:{}])
    Sep 17 01:27:52.253: INFO: ...failed...will try again in next pass
    Sep 17 01:27:52.253: INFO: Breadth first check of 192.168.2.84 on host 172.18.0.7...
    Sep 17 01:27:52.256: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.85:9080/dial?request=hostname&protocol=http&host=192.168.2.84&port=8080&tries=1'] Namespace:pod-network-test-90 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep 17 01:27:52.256: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 17 01:27:52.355: INFO: Waiting for responses: map[]
    Sep 17 01:27:52.355: INFO: reached 192.168.2.84 after 0/1 tries
    Sep 17 01:27:52.355: INFO: Breadth first check of 192.168.6.136 on host 172.18.0.6...
... skipping 379 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  6m    default-scheduler  Successfully assigned pod-network-test-90/netserver-3 to k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr
      Normal  Pulled     6m    kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine
      Normal  Created    6m    kubelet            Created container webserver
      Normal  Started    6m    kubelet            Started container webserver
    
    Sep 17 01:33:19.338: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.85:9080/dial?request=hostname&protocol=http&host=192.168.0.49&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-0:{}])
    Sep 17 01:33:19.339: INFO: ... Done probing pod [[[ 192.168.0.49 ]]]
    Sep 17 01:33:19.339: INFO: succeeded at polling 3 out of 4 connections
... skipping 374 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  11m   default-scheduler  Successfully assigned pod-network-test-90/netserver-3 to k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr
      Normal  Pulled     11m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine
      Normal  Created    11m   kubelet            Created container webserver
      Normal  Started    11m   kubelet            Started container webserver
    
    Sep 17 01:38:45.957: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.85:9080/dial?request=hostname&protocol=http&host=192.168.1.80&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-1:{}])
    Sep 17 01:38:45.957: INFO: ... Done probing pod [[[ 192.168.1.80 ]]]
    Sep 17 01:38:45.957: INFO: succeeded at polling 2 out of 4 connections
    Sep 17 01:38:45.957: INFO: pod polling failure summary:
    Sep 17 01:38:45.957: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.85:9080/dial?request=hostname&protocol=http&host=192.168.0.49&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-0:{}]
    Sep 17 01:38:45.957: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.85:9080/dial?request=hostname&protocol=http&host=192.168.1.80&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-1:{}]
    Sep 17 01:38:45.957: FAIL: failed,  2 out of 4 connections failed
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common.glob..func16.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0036b6180)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    
        Sep 17 01:38:45.957: failed,  2 out of 4 connections failed
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:82
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":499,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:38:45.977: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 43 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:39:06.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-1308" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":499,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 38 lines ...
    Sep 17 01:32:36.867: INFO: stderr: ""
    Sep 17 01:32:36.867: INFO: stdout: "true"
    Sep 17 01:32:36.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2830 get pods update-demo-nautilus-hjfsv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 17 01:32:36.956: INFO: stderr: ""
    Sep 17 01:32:36.956: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
    Sep 17 01:32:36.956: INFO: validating pod update-demo-nautilus-hjfsv
    Sep 17 01:36:11.278: INFO: update-demo-nautilus-hjfsv is running right image but validator function failed: an error on the server ("unknown") has prevented the request from succeeding (get pods update-demo-nautilus-hjfsv)
    Sep 17 01:36:16.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2830 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
    Sep 17 01:36:16.371: INFO: stderr: ""
    Sep 17 01:36:16.371: INFO: stdout: "update-demo-nautilus-f8bsg update-demo-nautilus-hjfsv "
    Sep 17 01:36:16.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2830 get pods update-demo-nautilus-f8bsg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
    Sep 17 01:36:16.460: INFO: stderr: ""
    Sep 17 01:36:16.460: INFO: stdout: "true"
... skipping 11 lines ...
    Sep 17 01:36:16.658: INFO: stderr: ""
    Sep 17 01:36:16.658: INFO: stdout: "true"
    Sep 17 01:36:16.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2830 get pods update-demo-nautilus-hjfsv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 17 01:36:16.745: INFO: stderr: ""
    Sep 17 01:36:16.745: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
    Sep 17 01:36:16.745: INFO: validating pod update-demo-nautilus-hjfsv
    Sep 17 01:39:50.414: INFO: update-demo-nautilus-hjfsv is running right image but validator function failed: an error on the server ("unknown") has prevented the request from succeeding (get pods update-demo-nautilus-hjfsv)
    Sep 17 01:39:55.414: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 +0x2ad
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001e4b380)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 51 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:40:06.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-1786" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":519,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:40:06.527: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test override command
    Sep 17 01:40:06.567: INFO: Waiting up to 5m0s for pod "client-containers-314e0dc5-de0c-4e02-b9a8-ea60ebb53459" in namespace "containers-9232" to be "Succeeded or Failed"
    Sep 17 01:40:06.569: INFO: Pod "client-containers-314e0dc5-de0c-4e02-b9a8-ea60ebb53459": Phase="Pending", Reason="", readiness=false. Elapsed: 2.645016ms
    Sep 17 01:40:08.573: INFO: Pod "client-containers-314e0dc5-de0c-4e02-b9a8-ea60ebb53459": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006052797s
    STEP: Saw pod success
    Sep 17 01:40:08.573: INFO: Pod "client-containers-314e0dc5-de0c-4e02-b9a8-ea60ebb53459" satisfied condition "Succeeded or Failed"
    Sep 17 01:40:08.576: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod client-containers-314e0dc5-de0c-4e02-b9a8-ea60ebb53459 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:40:08.604: INFO: Waiting for pod client-containers-314e0dc5-de0c-4e02-b9a8-ea60ebb53459 to disappear
    Sep 17 01:40:08.608: INFO: Pod client-containers-314e0dc5-de0c-4e02-b9a8-ea60ebb53459 no longer exists
    [AfterEach] [k8s.io] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:40:08.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-9232" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":524,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 135 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:41:10.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-5699" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":31,"skipped":528,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:41:16.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7267" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":537,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 7 lines ...
    STEP: Creating replication controller my-hostname-basic-e62eee54-7441-4cda-973c-73a18f919ebd
    Sep 17 01:34:40.084: INFO: Pod name my-hostname-basic-e62eee54-7441-4cda-973c-73a18f919ebd: Found 0 pods out of 1
    Sep 17 01:34:45.087: INFO: Pod name my-hostname-basic-e62eee54-7441-4cda-973c-73a18f919ebd: Found 1 pods out of 1
    Sep 17 01:34:45.087: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e62eee54-7441-4cda-973c-73a18f919ebd" are running
    Sep 17 01:34:45.090: INFO: Pod "my-hostname-basic-e62eee54-7441-4cda-973c-73a18f919ebd-zfrtg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-17 01:34:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-17 01:34:41 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-17 01:34:41 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-17 01:34:40 +0000 UTC Reason: Message:}])
    Sep 17 01:34:45.090: INFO: Trying to dial the pod
    Sep 17 01:38:24.398: INFO: Controller my-hostname-basic-e62eee54-7441-4cda-973c-73a18f919ebd: Failed to GET from replica 1 [my-hostname-basic-e62eee54-7441-4cda-973c-73a18f919ebd-zfrtg]: an error on the server ("unknown") has prevented the request from succeeding (get pods my-hostname-basic-e62eee54-7441-4cda-973c-73a18f919ebd-zfrtg)
    pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798975280, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798975281, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798975281, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798975280, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"192.168.2.92", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.2.92"}}, StartTime:(*v1.Time)(0xc0027158a0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-e62eee54-7441-4cda-973c-73a18f919ebd", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002715940), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.21", ImageID:"k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a", 
ContainerID:"containerd://85ea478ab31b4c57b4090f29f0bef0495083b54c7d18429aa831a0a771f151a0", Started:(*bool)(0xc0041a451a)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
    Sep 17 01:41:57.394: INFO: Controller my-hostname-basic-e62eee54-7441-4cda-973c-73a18f919ebd: Failed to GET from replica 1 [my-hostname-basic-e62eee54-7441-4cda-973c-73a18f919ebd-zfrtg]: an error on the server ("unknown") has prevented the request from succeeding (get pods my-hostname-basic-e62eee54-7441-4cda-973c-73a18f919ebd-zfrtg)
    pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798975280, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798975281, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798975281, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798975280, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"192.168.2.92", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.2.92"}}, StartTime:(*v1.Time)(0xc0027158a0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-e62eee54-7441-4cda-973c-73a18f919ebd", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002715940), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.21", ImageID:"k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a", 
ContainerID:"containerd://85ea478ab31b4c57b4090f29f0bef0495083b54c7d18429aa831a0a771f151a0", Started:(*bool)(0xc0041a451a)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
    Sep 17 01:41:57.394: FAIL: Did not get expected responses within the timeout period of 120.00 seconds.
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/apps.glob..func8.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65 +0x57
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0027cdb00)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    
      Sep 17 01:41:57.394: Did not get expected responses within the timeout period of 120.00 seconds.
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65
    ------------------------------
    {"msg":"FAILED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":40,"skipped":1000,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:41:57.410: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename replication-controller
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:42:07.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-7876" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":41,"skipped":1000,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:42:07.505: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test env composition
    Sep 17 01:42:07.543: INFO: Waiting up to 5m0s for pod "var-expansion-0b62a5f8-4128-4d18-a9b9-1ebb70141227" in namespace "var-expansion-1811" to be "Succeeded or Failed"
    Sep 17 01:42:07.547: INFO: Pod "var-expansion-0b62a5f8-4128-4d18-a9b9-1ebb70141227": Phase="Pending", Reason="", readiness=false. Elapsed: 3.086101ms
    Sep 17 01:42:09.550: INFO: Pod "var-expansion-0b62a5f8-4128-4d18-a9b9-1ebb70141227": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006695776s
    STEP: Saw pod success
    Sep 17 01:42:09.550: INFO: Pod "var-expansion-0b62a5f8-4128-4d18-a9b9-1ebb70141227" satisfied condition "Succeeded or Failed"
    Sep 17 01:42:09.553: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod var-expansion-0b62a5f8-4128-4d18-a9b9-1ebb70141227 container dapi-container: <nil>
    STEP: delete the pod
    Sep 17 01:42:09.572: INFO: Waiting for pod var-expansion-0b62a5f8-4128-4d18-a9b9-1ebb70141227 to disappear
    Sep 17 01:42:09.574: INFO: Pod var-expansion-0b62a5f8-4128-4d18-a9b9-1ebb70141227 no longer exists
    [AfterEach] [k8s.io] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:42:09.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-1811" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":1017,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: create the rc
    STEP: delete the rc
    STEP: wait for all pods to be garbage collected
    STEP: Gathering metrics
    W0917 01:37:23.462380      18 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
    Sep 17 01:42:23.466: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
    [AfterEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:42:23.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-2811" for this suite.
    
    
    • [SLOW TEST:310.080 seconds]
    [sig-api-machinery] Garbage collector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should delete pods created by rc when not orphaning [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":139,"skipped":2486,"failed":0}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:42:23.523: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be885bf7-aed9-4902-bd77-1a7a846292d7" in namespace "projected-9987" to be "Succeeded or Failed"
    Sep 17 01:42:23.525: INFO: Pod "downwardapi-volume-be885bf7-aed9-4902-bd77-1a7a846292d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092136ms
    Sep 17 01:42:25.529: INFO: Pod "downwardapi-volume-be885bf7-aed9-4902-bd77-1a7a846292d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005812972s
    STEP: Saw pod success
    Sep 17 01:42:25.529: INFO: Pod "downwardapi-volume-be885bf7-aed9-4902-bd77-1a7a846292d7" satisfied condition "Succeeded or Failed"
    Sep 17 01:42:25.532: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr pod downwardapi-volume-be885bf7-aed9-4902-bd77-1a7a846292d7 container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:42:25.562: INFO: Waiting for pod downwardapi-volume-be885bf7-aed9-4902-bd77-1a7a846292d7 to disappear
    Sep 17 01:42:25.566: INFO: Pod downwardapi-volume-be885bf7-aed9-4902-bd77-1a7a846292d7 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:42:25.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9987" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":140,"skipped":2494,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:42:49.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-2382" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":141,"skipped":2495,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:42:49.141: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's command [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test substitution in container's command
    Sep 17 01:42:49.179: INFO: Waiting up to 5m0s for pod "var-expansion-d38114c7-af45-4ccb-bfe5-40ac5224d184" in namespace "var-expansion-3712" to be "Succeeded or Failed"
    Sep 17 01:42:49.183: INFO: Pod "var-expansion-d38114c7-af45-4ccb-bfe5-40ac5224d184": Phase="Pending", Reason="", readiness=false. Elapsed: 4.677714ms
    Sep 17 01:42:51.187: INFO: Pod "var-expansion-d38114c7-af45-4ccb-bfe5-40ac5224d184": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008400906s
    STEP: Saw pod success
    Sep 17 01:42:51.187: INFO: Pod "var-expansion-d38114c7-af45-4ccb-bfe5-40ac5224d184" satisfied condition "Succeeded or Failed"
    Sep 17 01:42:51.190: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod var-expansion-d38114c7-af45-4ccb-bfe5-40ac5224d184 container dapi-container: <nil>
    STEP: delete the pod
    Sep 17 01:42:51.205: INFO: Waiting for pod var-expansion-d38114c7-af45-4ccb-bfe5-40ac5224d184 to disappear
    Sep 17 01:42:51.207: INFO: Pod var-expansion-d38114c7-af45-4ccb-bfe5-40ac5224d184 no longer exists
    [AfterEach] [k8s.io] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:42:51.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-3712" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":142,"skipped":2545,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:08.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-5837" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":143,"skipped":2546,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 37 lines ...
    Sep 17 01:43:13.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3669 explain e2e-test-crd-publish-openapi-4556-crds.spec'
    Sep 17 01:43:13.906: INFO: stderr: ""
    Sep 17 01:43:13.906: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4556-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
    Sep 17 01:43:13.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3669 explain e2e-test-crd-publish-openapi-4556-crds.spec.bars'
    Sep 17 01:43:14.132: INFO: stderr: ""
    Sep 17 01:43:14.132: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4556-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
    STEP: kubectl explain works to return error when explain is called on property that doesn't exist
    Sep 17 01:43:14.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3669 explain e2e-test-crd-publish-openapi-4556-crds.spec.bars2'
    Sep 17 01:43:14.369: INFO: rc: 1
    [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:16.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-3669" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":144,"skipped":2553,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:43:16.606: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep 17 01:43:16.645: INFO: Waiting up to 5m0s for pod "pod-871734c9-f469-4be3-8d1d-7ba4940247f4" in namespace "emptydir-8478" to be "Succeeded or Failed"
    Sep 17 01:43:16.648: INFO: Pod "pod-871734c9-f469-4be3-8d1d-7ba4940247f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.791872ms
    Sep 17 01:43:18.653: INFO: Pod "pod-871734c9-f469-4be3-8d1d-7ba4940247f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007175269s
    STEP: Saw pod success
    Sep 17 01:43:18.653: INFO: Pod "pod-871734c9-f469-4be3-8d1d-7ba4940247f4" satisfied condition "Succeeded or Failed"
    Sep 17 01:43:18.656: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr pod pod-871734c9-f469-4be3-8d1d-7ba4940247f4 container test-container: <nil>
    STEP: delete the pod
    Sep 17 01:43:18.677: INFO: Waiting for pod pod-871734c9-f469-4be3-8d1d-7ba4940247f4 to disappear
    Sep 17 01:43:18.681: INFO: Pod pod-871734c9-f469-4be3-8d1d-7ba4940247f4 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:18.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8478" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":145,"skipped":2565,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Lease
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:18.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "lease-test-1165" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":146,"skipped":2599,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:25.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3126" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":1020,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:43:26.064: INFO: Waiting up to 5m0s for pod "downwardapi-volume-adec68a7-83bd-42cb-80fd-dd8e38b061e8" in namespace "projected-4175" to be "Succeeded or Failed"
    Sep 17 01:43:26.068: INFO: Pod "downwardapi-volume-adec68a7-83bd-42cb-80fd-dd8e38b061e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.629269ms
    Sep 17 01:43:28.072: INFO: Pod "downwardapi-volume-adec68a7-83bd-42cb-80fd-dd8e38b061e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007230281s
    STEP: Saw pod success
    Sep 17 01:43:28.073: INFO: Pod "downwardapi-volume-adec68a7-83bd-42cb-80fd-dd8e38b061e8" satisfied condition "Succeeded or Failed"
    Sep 17 01:43:28.076: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr pod downwardapi-volume-adec68a7-83bd-42cb-80fd-dd8e38b061e8 container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:43:28.091: INFO: Waiting for pod downwardapi-volume-adec68a7-83bd-42cb-80fd-dd8e38b061e8 to disappear
    Sep 17 01:43:28.094: INFO: Pod downwardapi-volume-adec68a7-83bd-42cb-80fd-dd8e38b061e8 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:28.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4175" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":1071,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:43:28.160: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44a590a0-4eac-47ab-8a9a-0ae9a14e3bb5" in namespace "downward-api-796" to be "Succeeded or Failed"
    Sep 17 01:43:28.164: INFO: Pod "downwardapi-volume-44a590a0-4eac-47ab-8a9a-0ae9a14e3bb5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.168772ms
    Sep 17 01:43:30.170: INFO: Pod "downwardapi-volume-44a590a0-4eac-47ab-8a9a-0ae9a14e3bb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009760088s
    STEP: Saw pod success
    Sep 17 01:43:30.170: INFO: Pod "downwardapi-volume-44a590a0-4eac-47ab-8a9a-0ae9a14e3bb5" satisfied condition "Succeeded or Failed"
    Sep 17 01:43:30.173: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr pod downwardapi-volume-44a590a0-4eac-47ab-8a9a-0ae9a14e3bb5 container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:43:30.188: INFO: Waiting for pod downwardapi-volume-44a590a0-4eac-47ab-8a9a-0ae9a14e3bb5 to disappear
    Sep 17 01:43:30.191: INFO: Pod downwardapi-volume-44a590a0-4eac-47ab-8a9a-0ae9a14e3bb5 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:30.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-796" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":1082,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:43:30.202: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
    STEP: Destroying namespace "webhook-963-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":46,"skipped":1082,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Ingress API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:35.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingress-2212" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":47,"skipped":1096,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
    Sep 17 01:43:24.385: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should honor timeout [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Setting timeout (1s) shorter than webhook latency (5s)
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
    STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is longer than webhook latency
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is empty (defaulted to 10s in v1)
    STEP: Registering slow webhook via the AdmissionRegistration API
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:36.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-8578" for this suite.
    STEP: Destroying namespace "webhook-8578-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":147,"skipped":2606,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:43:35.845: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating projection with secret that has name projected-secret-test-map-a4c8abf0-c279-4567-b607-0bf5039d697e
    STEP: Creating a pod to test consume secrets
    Sep 17 01:43:35.884: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f98fd341-3295-4728-81a2-e088ae8f5c67" in namespace "projected-8153" to be "Succeeded or Failed"
    Sep 17 01:43:35.889: INFO: Pod "pod-projected-secrets-f98fd341-3295-4728-81a2-e088ae8f5c67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479106ms
    Sep 17 01:43:37.893: INFO: Pod "pod-projected-secrets-f98fd341-3295-4728-81a2-e088ae8f5c67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008441042s
    STEP: Saw pod success
    Sep 17 01:43:37.893: INFO: Pod "pod-projected-secrets-f98fd341-3295-4728-81a2-e088ae8f5c67" satisfied condition "Succeeded or Failed"
    Sep 17 01:43:37.896: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-08uw3p pod pod-projected-secrets-f98fd341-3295-4728-81a2-e088ae8f5c67 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:43:37.912: INFO: Waiting for pod pod-projected-secrets-f98fd341-3295-4728-81a2-e088ae8f5c67 to disappear
    Sep 17 01:43:37.916: INFO: Pod pod-projected-secrets-f98fd341-3295-4728-81a2-e088ae8f5c67 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 10 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test downward API volume plugin
    Sep 17 01:43:36.683: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d916421b-3656-4d0c-8d1d-0daab2f2afb0" in namespace "downward-api-332" to be "Succeeded or Failed"
    Sep 17 01:43:36.688: INFO: Pod "downwardapi-volume-d916421b-3656-4d0c-8d1d-0daab2f2afb0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.362842ms
    Sep 17 01:43:38.693: INFO: Pod "downwardapi-volume-d916421b-3656-4d0c-8d1d-0daab2f2afb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010631775s
    STEP: Saw pod success
    Sep 17 01:43:38.693: INFO: Pod "downwardapi-volume-d916421b-3656-4d0c-8d1d-0daab2f2afb0" satisfied condition "Succeeded or Failed"
    Sep 17 01:43:38.697: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr pod downwardapi-volume-d916421b-3656-4d0c-8d1d-0daab2f2afb0 container client-container: <nil>
    STEP: delete the pod
    Sep 17 01:43:38.721: INFO: Waiting for pod downwardapi-volume-d916421b-3656-4d0c-8d1d-0daab2f2afb0 to disappear
    Sep 17 01:43:38.724: INFO: Pod downwardapi-volume-d916421b-3656-4d0c-8d1d-0daab2f2afb0 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:38.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-332" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":148,"skipped":2630,"failed":0}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:38.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9778" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":149,"skipped":2645,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:38.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-12" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":150,"skipped":2658,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    • [SLOW TEST:142.371 seconds]
    [k8s.io] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
      should have monotonically increasing restart count [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    ------------------------------
    {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":548,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:39.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-5296" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":34,"skipped":554,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Discovery
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 89 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:40.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "discovery-2391" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":35,"skipped":558,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [k8s.io] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:41.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-646" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":1118,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:43:37.927: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:41.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5586" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":1118,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":151,"skipped":2667,"failed":0}

    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:43:41.050: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating projection with secret that has name projected-secret-test-7935cd62-07a6-49af-a77c-1542755b7f66
    STEP: Creating a pod to test consume secrets
    Sep 17 01:43:41.085: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-38651b70-1f9a-4b54-b03c-c2bc36dd0f34" in namespace "projected-3232" to be "Succeeded or Failed"
    Sep 17 01:43:41.089: INFO: Pod "pod-projected-secrets-38651b70-1f9a-4b54-b03c-c2bc36dd0f34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.974022ms
    Sep 17 01:43:43.094: INFO: Pod "pod-projected-secrets-38651b70-1f9a-4b54-b03c-c2bc36dd0f34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008873122s
    STEP: Saw pod success
    Sep 17 01:43:43.095: INFO: Pod "pod-projected-secrets-38651b70-1f9a-4b54-b03c-c2bc36dd0f34" satisfied condition "Succeeded or Failed"
    Sep 17 01:43:43.098: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr pod pod-projected-secrets-38651b70-1f9a-4b54-b03c-c2bc36dd0f34 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:43:43.123: INFO: Waiting for pod pod-projected-secrets-38651b70-1f9a-4b54-b03c-c2bc36dd0f34 to disappear
    Sep 17 01:43:43.126: INFO: Pod pod-projected-secrets-38651b70-1f9a-4b54-b03c-c2bc36dd0f34 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:43.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3232" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":152,"skipped":2667,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:43:42.024: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating configMap with name configmap-test-volume-25417d81-3788-4e27-aa8f-778d36bbf72c
    STEP: Creating a pod to test consume configMaps
    Sep 17 01:43:42.066: INFO: Waiting up to 5m0s for pod "pod-configmaps-21ebaf38-93ab-49ef-8341-42c1912b6159" in namespace "configmap-1476" to be "Succeeded or Failed"
    Sep 17 01:43:42.069: INFO: Pod "pod-configmaps-21ebaf38-93ab-49ef-8341-42c1912b6159": Phase="Pending", Reason="", readiness=false. Elapsed: 3.128265ms
    Sep 17 01:43:44.076: INFO: Pod "pod-configmaps-21ebaf38-93ab-49ef-8341-42c1912b6159": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009604086s
    STEP: Saw pod success
    Sep 17 01:43:44.076: INFO: Pod "pod-configmaps-21ebaf38-93ab-49ef-8341-42c1912b6159" satisfied condition "Succeeded or Failed"
    Sep 17 01:43:44.079: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-md-0-flcs5-5567b67d68-cgzrr pod pod-configmaps-21ebaf38-93ab-49ef-8341-42c1912b6159 container configmap-volume-test: <nil>
    STEP: delete the pod
    Sep 17 01:43:44.102: INFO: Waiting for pod pod-configmaps-21ebaf38-93ab-49ef-8341-42c1912b6159 to disappear
    Sep 17 01:43:44.106: INFO: Pod pod-configmaps-21ebaf38-93ab-49ef-8341-42c1912b6159 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:44.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-1476" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":1129,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:43:40.063: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pods
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [k8s.io] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187
    [It] should contain environment variables for services [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    Sep 17 01:43:42.130: INFO: Waiting up to 5m0s for pod "client-envvars-c6525f0d-f064-469f-9a30-bd01440ee6bf" in namespace "pods-5959" to be "Succeeded or Failed"
    Sep 17 01:43:42.138: INFO: Pod "client-envvars-c6525f0d-f064-469f-9a30-bd01440ee6bf": Phase="Pending", Reason="", readiness=false. Elapsed: 7.371955ms
    Sep 17 01:43:44.142: INFO: Pod "client-envvars-c6525f0d-f064-469f-9a30-bd01440ee6bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011878976s
    STEP: Saw pod success
    Sep 17 01:43:44.142: INFO: Pod "client-envvars-c6525f0d-f064-469f-9a30-bd01440ee6bf" satisfied condition "Succeeded or Failed"
    Sep 17 01:43:44.148: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod client-envvars-c6525f0d-f064-469f-9a30-bd01440ee6bf container env3cont: <nil>
    STEP: delete the pod
    Sep 17 01:43:44.172: INFO: Waiting for pod client-envvars-c6525f0d-f064-469f-9a30-bd01440ee6bf to disappear
    Sep 17 01:43:44.176: INFO: Pod client-envvars-c6525f0d-f064-469f-9a30-bd01440ee6bf no longer exists
    [AfterEach] [k8s.io] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:44.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-5959" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":562,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:47.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-1614" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":51,"skipped":1143,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:47.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7918" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":52,"skipped":1163,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:43:44.300: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename job
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a job
    STEP: Ensuring job reaches completions
    [AfterEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:50.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-6928" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":37,"skipped":625,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:54.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-1457" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":153,"skipped":2733,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:43:54.393: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should mount projected service account token [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test service account token: 
    Sep 17 01:43:54.431: INFO: Waiting up to 5m0s for pod "test-pod-d7bb51c4-a989-4d69-850d-2bb26bf073a7" in namespace "svcaccounts-9352" to be "Succeeded or Failed"
    Sep 17 01:43:54.434: INFO: Pod "test-pod-d7bb51c4-a989-4d69-850d-2bb26bf073a7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.391576ms
    Sep 17 01:43:56.438: INFO: Pod "test-pod-d7bb51c4-a989-4d69-850d-2bb26bf073a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006855031s
    STEP: Saw pod success
    Sep 17 01:43:56.438: INFO: Pod "test-pod-d7bb51c4-a989-4d69-850d-2bb26bf073a7" satisfied condition "Succeeded or Failed"
    Sep 17 01:43:56.441: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod test-pod-d7bb51c4-a989-4d69-850d-2bb26bf073a7 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 17 01:43:56.462: INFO: Waiting for pod test-pod-d7bb51c4-a989-4d69-850d-2bb26bf073a7 to disappear
    Sep 17 01:43:56.466: INFO: Pod test-pod-d7bb51c4-a989-4d69-850d-2bb26bf073a7 no longer exists
    [AfterEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:56.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-9352" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":154,"skipped":2773,"failed":0}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:57.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-5751" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":38,"skipped":635,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:57.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-6147" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":39,"skipped":672,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [k8s.io] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:57.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-3587" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":-1,"completed":40,"skipped":682,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:43:57.795: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep 17 01:43:57.838: INFO: Waiting up to 5m0s for pod "pod-7dbe5066-e8ea-4bee-b415-4e68b6a9378e" in namespace "emptydir-3586" to be "Succeeded or Failed"
    Sep 17 01:43:57.842: INFO: Pod "pod-7dbe5066-e8ea-4bee-b415-4e68b6a9378e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.230774ms
    Sep 17 01:43:59.846: INFO: Pod "pod-7dbe5066-e8ea-4bee-b415-4e68b6a9378e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007819873s
    STEP: Saw pod success
    Sep 17 01:43:59.846: INFO: Pod "pod-7dbe5066-e8ea-4bee-b415-4e68b6a9378e" satisfied condition "Succeeded or Failed"
    Sep 17 01:43:59.849: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8gqwip-worker-s1w5gr pod pod-7dbe5066-e8ea-4bee-b415-4e68b6a9378e container test-container: <nil>
    STEP: delete the pod
    Sep 17 01:43:59.865: INFO: Waiting for pod pod-7dbe5066-e8ea-4bee-b415-4e68b6a9378e to disappear
    Sep 17 01:43:59.867: INFO: Pod pod-7dbe5066-e8ea-4bee-b415-4e68b6a9378e no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:43:59.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-3586" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":684,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:44:03.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-1989" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":53,"skipped":1172,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:44:16.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-2405" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":54,"skipped":1179,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Looking for a node to schedule stateful set and pod
    STEP: Creating pod with conflicting port in namespace statefulset-6128
    STEP: Creating statefulset with conflicting port in namespace statefulset-6128
    STEP: Waiting until pod test-pod will start running in namespace statefulset-6128
    STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6128
    Sep 17 01:44:03.942: INFO: Observed stateful pod in namespace: statefulset-6128, name: ss-0, uid: 95c9bbef-b767-49e7-9a55-7d969c6c4db1, status phase: Pending. Waiting for statefulset controller to delete.
    Sep 17 01:44:04.494: INFO: Observed stateful pod in namespace: statefulset-6128, name: ss-0, uid: 95c9bbef-b767-49e7-9a55-7d969c6c4db1, status phase: Failed. Waiting for statefulset controller to delete.
    Sep 17 01:44:04.505: INFO: Observed stateful pod in namespace: statefulset-6128, name: ss-0, uid: 95c9bbef-b767-49e7-9a55-7d969c6c4db1, status phase: Failed. Waiting for statefulset controller to delete.
    Sep 17 01:44:04.509: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6128
    STEP: Removing pod with conflicting port in namespace statefulset-6128
    STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6128 and will be in running state
    [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
    Sep 17 01:44:06.538: INFO: Deleting all statefulset in ns statefulset-6128
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:44:16.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-6128" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":42,"skipped":685,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 28 lines ...
    STEP: Destroying namespace "webhook-4271-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":55,"skipped":1181,"failed":4,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:44:27.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-4681" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":43,"skipped":731,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    Sep 17 01:44:27.733: INFO: Running AfterSuite actions on all nodes
    
    
    {"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":36,"skipped":674,"failed":5,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:39:56.405: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 96 lines ...
    Sep 17 01:40:10.442: INFO: stderr: ""
    Sep 17 01:40:10.442: INFO: stdout: "true"
    Sep 17 01:40:10.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9033 get pods update-demo-nautilus-stv25 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 17 01:40:10.527: INFO: stderr: ""
    Sep 17 01:40:10.527: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
    Sep 17 01:40:10.527: INFO: validating pod update-demo-nautilus-stv25
    Sep 17 01:43:43.886: INFO: update-demo-nautilus-stv25 is running right image but validator function failed: an error on the server ("unknown") has prevented the request from succeeding (get pods update-demo-nautilus-stv25)
    Sep 17 01:43:48.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9033 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
    Sep 17 01:43:48.996: INFO: stderr: ""
    Sep 17 01:43:48.996: INFO: stdout: "update-demo-nautilus-9c92l update-demo-nautilus-stv25 "
    Sep 17 01:43:48.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9033 get pods update-demo-nautilus-9c92l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
    Sep 17 01:43:49.103: INFO: stderr: ""
    Sep 17 01:43:49.103: INFO: stdout: "true"
... skipping 11 lines ...
    Sep 17 01:43:49.414: INFO: stderr: ""
    Sep 17 01:43:49.415: INFO: stdout: "true"
    Sep 17 01:43:49.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9033 get pods update-demo-nautilus-stv25 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 17 01:43:49.562: INFO: stderr: ""
    Sep 17 01:43:49.562: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
    Sep 17 01:43:49.562: INFO: validating pod update-demo-nautilus-stv25
    Sep 17 01:47:23.022: INFO: update-demo-nautilus-stv25 is running right image but validator function failed: an error on the server ("unknown") has prevented the request from succeeding (get pods update-demo-nautilus-stv25)

    Sep 17 01:47:28.023: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:338 +0x66d
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001e4b380)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 34 lines ...
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
    
        Sep 17 01:47:28.023: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:338
    ------------------------------
    {"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":36,"skipped":674,"failed":6,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
    STEP: Creating a kubernetes client
    Sep 17 01:47:29.342: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 123 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
    Sep 17 01:47:49.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2182" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":37,"skipped":674,"failed":6,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    Sep 17 01:47:49.565: INFO: Running AfterSuite actions on all nodes
    
    STEP: Dumping logs from the "k8s-upgrade-and-conformance-8gqwip" workload cluster 09/17/22 01:51:19.39
    STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-yh3rl6" namespace 09/17/22 01:51:22.52
    STEP: Deleting cluster k8s-upgrade-and-conformance-yh3rl6/k8s-upgrade-and-conformance-8gqwip 09/17/22 01:51:22.832
    STEP: Deleting cluster k8s-upgrade-and-conformance-8gqwip 09/17/22 01:51:22.85
... skipping 621 lines ...
  [INTERRUPTED] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] [It] Should create and upgrade a workload cluster and eventually run kubetest
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118
  [INTERRUPTED] [SynchronizedAfterSuite] 
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/e2e_suite_test.go:169

Ran 1 of 21 Specs in 3542.006 seconds
FAIL! - Interrupted by Other Ginkgo Process -- 0 Passed | 1 Failed | 0 Pending | 20 Skipped


Ginkgo ran 1 suite in 1h0m14.31455049s

Test Suite Failed
make: *** [Makefile:128: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
++ pgrep -f 'docker events'
+ kill 26268
++ pgrep -f 'ctr -n moby events'
+ kill 26269
... skipping 23 lines ...