Recent runs | View in Spyglass

Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 1h5m
Revision | release-1.1
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
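Decoded, the `--ginkgo.focus` pattern selects exactly one spec: "capi-e2e When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest" (regexp metacharacters are backslash-escaped and spaces written as `\s` so the pattern survives shell quoting). A quick, illustrative check of that match; this snippet is not part of the job itself:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The focus pattern exactly as passed to --ginkgo.focus above.
	focus := `capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$`
	// The human-readable spec name it is meant to select.
	spec := "capi-e2e When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest"
	fmt.Println(regexp.MustCompile(focus).MatchString(spec)) // prints: true
}
```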
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115
Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc000b2a8d0>: {
        error: <*errors.withMessage | 0xc00037eb20>{
            cause: <*errors.errorString | 0xc000c87460>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1a98018, 0x1adc429, 0x7b9731, 0x7b9125, 0x7b87fb, 0x7be569, 0x7bdf52, 0x7df031, 0x7ded56, 0x7de3a5, 0x7e07e5, 0x7ec9c9, 0x7ec7de, 0x1af7d32, 0x523bab, 0x46e1e1],
    }
Unable to run conformance tests: error container run failed with exit code 1
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:232
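The error chain in this dump is characteristic of github.com/pkg/errors. A minimal sketch, assuming that package, of how such a chain is assembled and printed; `runConformance` is a hypothetical stand-in for the helper that runs the kubetest container, not the actual test code:

```go
package main

import (
	"errors"
	"fmt"

	pkgerrors "github.com/pkg/errors" // produces the *errors.withMessage / *errors.withStack types seen above
)

// runConformance is a hypothetical stand-in for the helper that runs the
// conformance container and surfaces its exit code.
func runConformance() error {
	return errors.New("error container run failed with exit code 1")
}

func main() {
	if err := runConformance(); err != nil {
		// WithMessage adds the "Unable to run conformance tests" prefix;
		// WithStack records the program counters listed under `stack:`.
		err = pkgerrors.WithStack(pkgerrors.WithMessage(err, "Unable to run conformance tests"))
		fmt.Printf("%+v\n", err) // %+v prints the message chain plus the stack trace
	}
}
```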
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-z6xd2e
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-z6xd2e"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-1wcp0z" using the "upgrades-cgroupfs" template (Kubernetes v1.19.16, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-1wcp0z --infrastructure (default) --kubernetes-version v1.19.16 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades-cgroupfs
INFO: Applying the cluster template yaml to the cluster
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
configmap/cni-k8s-upgrade-and-conformance-1wcp0z-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-1wcp0z-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-1wcp0z-mp-0-config created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-1wcp0z-mp-0-config-cgroupfs created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-1wcp0z created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-1wcp0z-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-1wcp0z-dmp-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-z6xd2e/k8s-upgrade-and-conformance-1wcp0z-g74qf to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-z6xd2e/k8s-upgrade-and-conformance-1wcp0z-g74qf to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Cluster topology
INFO: Patching the new Kubernetes version to Cluster topology
INFO: Waiting for control-plane machines to have the upgraded Kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.20.15
INFO: Waiting for kube-proxy to have the upgraded Kubernetes version
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-z6xd2e/k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg to be upgraded to v1.20.15
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.20.15
STEP: Upgrading the machinepool instances
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-z6xd2e/k8s-upgrade-and-conformance-1wcp0z-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-z6xd2e/k8s-upgrade-and-conformance-1wcp0z-mp-0 to be upgraded from v1.19.16 to v1.20.15
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.20.15
STEP: Waiting until nodes are ready
STEP: Running conformance tests
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e, command=["-nodes=4" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=4" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "-disable-log-dump=true" "-ginkgo.flakeAttempts=3" "-ginkgo.focus=\\[Conformance\\]" "-ginkgo.progress=true" "-ginkgo.skip=\\[Serial\\]" "-ginkgo.slowSpecThreshold=120" "-ginkgo.trace=true" "-ginkgo.v=true"]
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1672756741 - Will randomize all specs
Will run 5668 specs
Running in parallel across 4 nodes
Jan 3 14:39:03.757: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:39:03.761: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 3 14:39:03.780: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 3 14:39:03.838: INFO: The status of Pod coredns-f9fd979d6-wpplt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:03.838: INFO: The status of Pod kindnet-jpxkc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:03.838: INFO: The status of Pod kindnet-spf5w is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:03.838: INFO: The status of Pod kube-proxy-dg4bv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:03.838: INFO: The status of Pod kube-proxy-q55vk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:03.838: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 3 14:39:03.838: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 3 14:39:03.838: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 14:39:03.838: INFO: coredns-f9fd979d6-wpplt k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC }]
Jan 3 14:39:03.838: INFO: kindnet-jpxkc k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:03 +0000 UTC }]
Jan 3 14:39:03.838: INFO: kindnet-spf5w k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:46 +0000 UTC }]
Jan 3 14:39:03.838: INFO: kube-proxy-dg4bv k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC }]
Jan 3 14:39:03.838: INFO: kube-proxy-q55vk k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC }]
Jan 3 14:39:03.838: INFO:
Jan 3 14:39:05.863: INFO: The status of Pod coredns-f9fd979d6-wpplt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:05.863: INFO: The status of Pod kindnet-jpxkc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:05.863: INFO: The status of Pod kindnet-spf5w is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:05.863: INFO: The status of Pod kube-proxy-dg4bv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:05.863: INFO: The status of Pod kube-proxy-q55vk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:05.863: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Jan 3 14:39:05.863: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 3 14:39:05.863: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 14:39:05.863: INFO: coredns-f9fd979d6-wpplt k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC }]
Jan 3 14:39:05.863: INFO: kindnet-jpxkc k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:03 +0000 UTC }]
Jan 3 14:39:05.863: INFO: kindnet-spf5w k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:46 +0000 UTC }]
Jan 3 14:39:05.863: INFO: kube-proxy-dg4bv k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC }]
Jan 3 14:39:05.863: INFO: kube-proxy-q55vk k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC }]
Jan 3 14:39:05.863: INFO:
Jan 3 14:39:07.863: INFO: The status of Pod coredns-f9fd979d6-wpplt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:07.863: INFO: The status of Pod kindnet-jpxkc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:07.863: INFO: The status of Pod kindnet-spf5w is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:07.863: INFO: The status of Pod kube-proxy-dg4bv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:07.863: INFO: The status of Pod kube-proxy-q55vk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:07.863: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
Jan 3 14:39:07.863: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 3 14:39:07.863: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 14:39:07.863: INFO: coredns-f9fd979d6-wpplt k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC }]
Jan 3 14:39:07.864: INFO: kindnet-jpxkc k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:03 +0000 UTC }]
Jan 3 14:39:07.864: INFO: kindnet-spf5w k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:46 +0000 UTC }]
Jan 3 14:39:07.864: INFO: kube-proxy-dg4bv k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC }]
Jan 3 14:39:07.864: INFO: kube-proxy-q55vk k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC }]
Jan 3 14:39:07.864: INFO:
Jan 3 14:39:09.861: INFO: The status of Pod coredns-f9fd979d6-wpplt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:09.861: INFO: The status of Pod kindnet-jpxkc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:09.861: INFO: The status of Pod kindnet-spf5w is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:09.861: INFO: The status of Pod kube-proxy-dg4bv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:09.861: INFO: The status of Pod kube-proxy-q55vk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:09.861: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
Jan 3 14:39:09.861: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 3 14:39:09.861: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 14:39:09.861: INFO: coredns-f9fd979d6-wpplt k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC }]
Jan 3 14:39:09.861: INFO: kindnet-jpxkc k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:03 +0000 UTC }]
Jan 3 14:39:09.861: INFO: kindnet-spf5w k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:46 +0000 UTC }]
Jan 3 14:39:09.861: INFO: kube-proxy-dg4bv k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC }]
Jan 3 14:39:09.861: INFO: kube-proxy-q55vk k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC }]
Jan 3 14:39:09.861: INFO:
Jan 3 14:39:11.862: INFO: The status of Pod coredns-f9fd979d6-wpplt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:11.862: INFO: The status of Pod kindnet-jpxkc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:11.862: INFO: The status of Pod kindnet-spf5w is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:11.862: INFO: The status of Pod kube-proxy-dg4bv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:11.862: INFO: The status of Pod kube-proxy-q55vk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:11.862: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
Jan 3 14:39:11.862: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 3 14:39:11.862: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 14:39:11.862: INFO: coredns-f9fd979d6-wpplt k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC }]
Jan 3 14:39:11.862: INFO: kindnet-jpxkc k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:03 +0000 UTC }]
Jan 3 14:39:11.862: INFO: kindnet-spf5w k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:46 +0000 UTC }]
Jan 3 14:39:11.862: INFO: kube-proxy-dg4bv k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC }]
Jan 3 14:39:11.862: INFO: kube-proxy-q55vk k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC }]
Jan 3 14:39:11.862: INFO:
Jan 3 14:39:13.861: INFO: The status of Pod coredns-f9fd979d6-wpplt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:13.862: INFO: The status of Pod kindnet-jpxkc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:13.862: INFO: The status of Pod kindnet-spf5w is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:13.862: INFO: The status of Pod kube-proxy-dg4bv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:13.862: INFO: The status of Pod kube-proxy-q55vk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:13.862: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (10 seconds elapsed)
Jan 3 14:39:13.862: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 3 14:39:13.862: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 14:39:13.862: INFO: coredns-f9fd979d6-wpplt k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC }]
Jan 3 14:39:13.862: INFO: kindnet-jpxkc k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:03 +0000 UTC }]
Jan 3 14:39:13.862: INFO: kindnet-spf5w k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:46 +0000 UTC }]
Jan 3 14:39:13.862: INFO: kube-proxy-dg4bv k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC }]
Jan 3 14:39:13.862: INFO: kube-proxy-q55vk k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC }]
Jan 3 14:39:13.862: INFO:
Jan 3 14:39:15.861: INFO: The status of Pod coredns-f9fd979d6-wpplt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:15.861: INFO: The status of Pod kindnet-jpxkc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:15.861: INFO: The status of Pod kindnet-spf5w is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:15.861: INFO: The status of Pod kube-proxy-dg4bv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:15.861: INFO: The status of Pod kube-proxy-q55vk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:15.861: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (12 seconds elapsed)
Jan 3 14:39:15.861: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 3 14:39:15.861: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 14:39:15.861: INFO: coredns-f9fd979d6-wpplt k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC }]
Jan 3 14:39:15.862: INFO: kindnet-jpxkc k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:03 +0000 UTC }]
Jan 3 14:39:15.862: INFO: kindnet-spf5w k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:46 +0000 UTC }]
Jan 3 14:39:15.862: INFO: kube-proxy-dg4bv k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC }]
Jan 3 14:39:15.862: INFO: kube-proxy-q55vk k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC }]
Jan 3 14:39:15.862: INFO:
Jan 3 14:39:17.859: INFO: The status of Pod coredns-f9fd979d6-wpplt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:17.859: INFO: The status of Pod kindnet-jpxkc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:17.859: INFO: The status of Pod kindnet-spf5w is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:17.859: INFO: The status of Pod kube-proxy-dg4bv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:17.859: INFO: The status of Pod kube-proxy-q55vk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:17.859: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (14 seconds elapsed)
Jan 3 14:39:17.859: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 3 14:39:17.859: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 14:39:17.859: INFO: coredns-f9fd979d6-wpplt k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC }]
Jan 3 14:39:17.859: INFO: kindnet-jpxkc k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:03 +0000 UTC }]
Jan 3 14:39:17.859: INFO: kindnet-spf5w k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:46 +0000 UTC }]
Jan 3 14:39:17.859: INFO: kube-proxy-dg4bv k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC }]
Jan 3 14:39:17.859: INFO: kube-proxy-q55vk k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC }]
Jan 3 14:39:17.859: INFO:
Jan 3 14:39:19.885: INFO: The status of Pod coredns-f9fd979d6-wpplt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:19.885: INFO: The status of Pod kindnet-jpxkc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:19.885: INFO: The status of Pod kindnet-spf5w is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:19.885: INFO: The status of Pod kube-proxy-dg4bv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:19.885: INFO: The status of Pod kube-proxy-q55vk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:19.885: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (16 seconds elapsed)
Jan 3 14:39:19.885: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 3 14:39:19.885: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 14:39:19.885: INFO: coredns-f9fd979d6-wpplt k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC }]
Jan 3 14:39:19.885: INFO: kindnet-jpxkc k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:03 +0000 UTC }]
Jan 3 14:39:19.885: INFO: kindnet-spf5w k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:46 +0000 UTC }]
Jan 3 14:39:19.885: INFO: kube-proxy-dg4bv k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC }]
Jan 3 14:39:19.885: INFO: kube-proxy-q55vk k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC }]
Jan 3 14:39:19.885: INFO:
Jan 3 14:39:21.860: INFO: The status of Pod coredns-f9fd979d6-wpplt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:21.860: INFO: The status of Pod kindnet-jpxkc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:21.860: INFO: The status of Pod kindnet-spf5w is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:21.861: INFO: The status of Pod kube-proxy-dg4bv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:21.861: INFO: The status of Pod kube-proxy-q55vk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:21.861: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (18 seconds elapsed)
Jan 3 14:39:21.861: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 3 14:39:21.861: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 14:39:21.861: INFO: coredns-f9fd979d6-wpplt k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:11 +0000 UTC }]
Jan 3 14:39:21.861: INFO: kindnet-jpxkc k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:03 +0000 UTC }]
Jan 3 14:39:21.861: INFO: kindnet-spf5w k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:30:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:29:46 +0000 UTC }]
Jan 3 14:39:21.861: INFO: kube-proxy-dg4bv k8s-upgrade-and-conformance-1wcp0z-worker-0zxwvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:36:31 +0000 UTC }]
Jan 3 14:39:21.861: INFO: kube-proxy-q55vk k8s-upgrade-and-conformance-1wcp0z-worker-uckuwj Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:35:46 +0000 UTC }]
Jan 3 14:39:21.861: INFO:
Jan 3 14:39:23.859: INFO: The status of Pod coredns-f9fd979d6-f5slv is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:23.859: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (20 seconds elapsed)
Jan 3 14:39:23.859: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 3 14:39:23.859: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 14:39:23.859: INFO: coredns-f9fd979d6-f5slv k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC }]
Jan 3 14:39:23.859: INFO:
Jan 3 14:39:25.857: INFO: The status of Pod coredns-f9fd979d6-f5slv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:25.857: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (22 seconds elapsed)
Jan 3 14:39:25.857: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 3 14:39:25.857: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 14:39:25.857: INFO: coredns-f9fd979d6-f5slv k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC }]
Jan 3 14:39:25.857: INFO:
Jan 3 14:39:27.862: INFO: The status of Pod coredns-f9fd979d6-f5slv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:27.862: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (24 seconds elapsed)
Jan 3 14:39:27.862: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 3 14:39:27.862: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 14:39:27.862: INFO: coredns-f9fd979d6-f5slv k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC }]
Jan 3 14:39:27.862: INFO:
Jan 3 14:39:29.857: INFO: The status of Pod coredns-f9fd979d6-f5slv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:29.857: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (26 seconds elapsed)
Jan 3 14:39:29.857: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 3 14:39:29.857: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 14:39:29.857: INFO: coredns-f9fd979d6-f5slv k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC }]
Jan 3 14:39:29.857: INFO:
Jan 3 14:39:31.859: INFO: The status of Pod coredns-f9fd979d6-f5slv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 3 14:39:31.859: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (28 seconds elapsed)
Jan 3 14:39:31.859: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 3 14:39:31.859: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 14:39:31.859: INFO: coredns-f9fd979d6-f5slv k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 14:39:23 +0000 UTC }]
Jan 3 14:39:31.859: INFO:
Jan 3 14:39:33.858: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (30 seconds elapsed)
Jan 3 14:39:33.858: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
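The repeating blocks above are a plain readiness poll over kube-system on a 2-second cadence. A rough client-go equivalent, offered as a sketch under assumptions (it is not the e2e framework's actual helper, and the kubeconfig path is copied from the log):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady mirrors the "Ready = false" check in the log: the PodReady
// condition must be True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 10m, the cadence and budget the log reports.
	err = wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		ready := 0
		for i := range pods.Items {
			p := &pods.Items[i]
			if p.Status.Phase == corev1.PodRunning && podReady(p) {
				ready++
			}
		}
		fmt.Printf("%d / %d pods in namespace 'kube-system' are running and ready\n", ready, len(pods.Items))
		return ready == len(pods.Items), nil
	})
	if err != nil {
		panic(err)
	}
}
```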
Jan 3 14:39:33.858: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 3 14:39:33.866: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jan 3 14:39:33.866: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 3 14:39:33.866: INFO: e2e test version: v1.20.15
Jan 3 14:39:33.868: INFO: kube-apiserver version: v1.20.15
Jan 3 14:39:33.869: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:39:33.875: INFO: Cluster IP family: ipv4
------------------------------
Jan 3 14:39:33.893: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:39:33.914: INFO: Cluster IP family: ipv4
------------------------------
Jan 3 14:39:33.893: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:39:33.917: INFO: Cluster IP family: ipv4
------------------------------
Jan 3 14:39:33.906: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:39:33.929: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:39:33.934: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
Jan 3 14:39:33.985: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 3 14:39:34.778: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 3 14:39:36.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353574, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353574, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353574, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353574, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 3 14:39:38.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353574, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353574, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353574, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353574, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 3 14:39:40.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353574, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353574, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353574, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353574, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
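The repeated `deployment status:` dumps above show the test waiting for the Deployment's Available condition to flip from "MinimumReplicasUnavailable" to True. A minimal sketch of that predicate (assuming client-go types; this is not the framework's exact check):

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// deploymentAvailable reports whether the Available condition is True:
// the state the poll above keeps waiting for while the dump still shows
// Reason:"MinimumReplicasUnavailable".
func deploymentAvailable(d *appsv1.Deployment) bool {
	for _, c := range d.Status.Conditions {
		if c.Type == appsv1.DeploymentAvailable {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```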
STEP: Verifying the service has paired with the endpoint
Jan 3 14:39:43.815: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:39:43.820: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8844-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:39:45.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9601" for this suite.
STEP: Destroying namespace "webhook-9601-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:39:45.165: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: getting the auto-created API token
Jan 3 14:39:45.787: INFO: created pod pod-service-account-defaultsa
Jan 3 14:39:45.788: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 3 14:39:45.793: INFO: created pod pod-service-account-mountsa
Jan 3 14:39:45.793: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 3 14:39:45.802: INFO: created pod pod-service-account-nomountsa
Jan 3 14:39:45.802: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 3 14:39:45.809: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 3 14:39:45.809: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 3 14:39:45.818: INFO: created pod pod-service-account-mountsa-mountspec
Jan 3 14:39:45.819: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 3 14:39:45.826: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 3 14:39:45.826: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 3 14:39:45.839: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 3 14:39:45.839: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 3 14:39:45.855: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 3 14:39:45.855: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 3 14:39:45.889: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 3 14:39:45.889: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:39:45.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9238" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":2,"skipped":26,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:39:33.932: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
Jan 3 14:39:33.989: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a service nodeport-service with the type=NodePort in namespace services-8590
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-8590
STEP: creating replication controller externalsvc in namespace services-8590
I0103 14:39:34.083526 14 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8590, replica count: 2
I0103 14:39:37.134245 14 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0103 14:39:40.134506 14 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the NodePort service to type=ExternalName
Jan 3 14:39:40.222: INFO: Creating new exec pod
Jan 3 14:39:44.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8590 exec execpodl6prn -- /bin/sh -x -c nslookup nodeport-service.services-8590.svc.cluster.local'
Jan 3 14:39:44.775: INFO: stderr: "+ nslookup nodeport-service.services-8590.svc.cluster.local\n"
Jan 3 14:39:44.775: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nnodeport-service.services-8590.svc.cluster.local\tcanonical name = externalsvc.services-8590.svc.cluster.local.\nName:\texternalsvc.services-8590.svc.cluster.local\nAddress: 10.135.2.187\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-8590, will wait for the garbage collector to delete the pods
Jan 3 14:39:44.836: INFO: Deleting ReplicationController externalsvc took: 7.714348ms
Jan 3 14:39:44.937: INFO: Terminating ReplicationController externalsvc pods took: 100.365705ms
Jan 3 14:39:57.457: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:39:57.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8590" for this suite.
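The "changing the NodePort service to type=ExternalName" step above amounts to a Service spec update. A hedged client-go sketch of that flip (namespace and names copied from the log; the real test goes through the e2e service helper, not this exact code):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	svc, err := cs.CoreV1().Services("services-8590").Get(context.TODO(), "nodeport-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Convert the Service to ExternalName, pointing it at the FQDN of the
	// backing service, as the test does.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-8590.svc.cluster.local"
	// ExternalName services carry no cluster IP or node ports, so clear them.
	svc.Spec.ClusterIP = ""
	svc.Spec.Ports = nil
	if _, err := cs.CoreV1().Services("services-8590").Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```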
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:39:57.490: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 3 14:39:57.573: INFO: Waiting up to 5m0s for pod "pod-a51faa7b-0098-4caa-86f8-043d5dabd6eb" in namespace "emptydir-3922" to be "Succeeded or Failed"
Jan 3 14:39:57.578: INFO: Pod "pod-a51faa7b-0098-4caa-86f8-043d5dabd6eb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.294539ms
Jan 3 14:39:59.583: INFO: Pod "pod-a51faa7b-0098-4caa-86f8-043d5dabd6eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010430129s
STEP: Saw pod success
Jan 3 14:39:59.584: INFO: Pod "pod-a51faa7b-0098-4caa-86f8-043d5dabd6eb" satisfied condition "Succeeded or Failed"
Jan 3 14:39:59.587: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-worker-erlai2 pod pod-a51faa7b-0098-4caa-86f8-043d5dabd6eb container test-container: <nil>
STEP: delete the pod
Jan 3 14:39:59.625: INFO: Waiting for pod pod-a51faa7b-0098-4caa-86f8-043d5dabd6eb to disappear
Jan 3 14:39:59.628: INFO: Pod pod-a51faa7b-0098-4caa-86f8-043d5dabd6eb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:39:59.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3922" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0}
------------------------------
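A hand-rolled version of the tmpfs emptyDir check above (names invented): mount an in-memory emptyDir, force mode 0777, and verify both the filesystem type and the permission bits.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.28
    command: ["sh", "-c", "chmod 0777 /mnt/tmpfs && mount | grep /mnt/tmpfs && ls -ld /mnt/tmpfs"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/tmpfs
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # tmpfs-backed, as in the (root,0777,tmpfs) case
EOF
kubectl logs emptydir-tmpfs-demo   # once the pod completes: expect "tmpfs" and drwxrwxrwx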
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:39:45.970: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Performing setup for networking test in namespace pod-network-test-322
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 3 14:39:46.024: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 3 14:39:46.103: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 3 14:39:48.109: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 3 14:39:50.109: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:39:52.107: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:39:54.108: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:39:56.108: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:39:58.107: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:40:00.108: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 3 14:40:00.115: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jan 3 14:40:02.120: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jan 3 14:40:02.130: INFO: The status of Pod netserver-2 is Running (Ready = false)
Jan 3 14:40:04.136: INFO: The status of Pod netserver-2 is Running (Ready = false)
Jan 3 14:40:06.136: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jan 3 14:40:06.143: INFO: The status of Pod netserver-3 is Running (Ready = false)
Jan 3 14:40:08.147: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jan 3 14:40:10.176: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Jan 3 14:40:10.176: INFO: Going to poll 192.168.0.9 on port 8080 at least 0 times, with a maximum of 46 tries before failing
Jan 3 14:40:10.179: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.0.9:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-322 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:40:10.179: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:40:10.270: INFO: Found all 1 expected endpoints: [netserver-0]
Jan 3 14:40:10.270: INFO: Going to poll 192.168.1.8 on port 8080 at least 0 times, with a maximum of 46 tries before failing
Jan 3 14:40:10.273: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.1.8:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-322 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:40:10.273: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:40:10.358: INFO: Found all 1 expected endpoints: [netserver-1]
Jan 3 14:40:10.358: INFO: Going to poll 192.168.2.8 on port 8080 at least 0 times, with a maximum of 46 tries before failing
Jan 3 14:40:10.361: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.8:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-322 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:40:10.361: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:40:10.463: INFO: Found all 1 expected endpoints: [netserver-2]
Jan 3 14:40:10.463: INFO: Going to poll 192.168.6.8 on port 8080 at least 0 times, with a maximum of 46 tries before failing
Jan 3 14:40:10.466: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.6.8:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-322 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:40:10.466: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:40:10.552: INFO: Found all 1 expected endpoints: [netserver-3]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:40:10.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-322" for this suite.
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":53,"failed":0}
------------------------------
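The polling above reduces to a reachability probe run from the host-network test pod against each netserver pod IP; the names and IP here are the ones from this run:

kubectl exec -n pod-network-test-322 host-test-container-pod -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.0.9:8080/hostName"
# The response body is the serving pod's name (netserver-0), which is how the
# test maps reachable endpoints back to pods.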
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:40:10.572: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-3b2a6a11-d001-4847-bd16-f3a291e49098
STEP: Creating a pod to test consume configMaps
Jan 3 14:40:10.619: INFO: Waiting up to 5m0s for pod "pod-configmaps-8cac65e1-715b-44f7-a5d8-acd46db62be7" in namespace "configmap-753" to be "Succeeded or Failed"
Jan 3 14:40:10.622: INFO: Pod "pod-configmaps-8cac65e1-715b-44f7-a5d8-acd46db62be7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.654052ms
Jan 3 14:40:12.626: INFO: Pod "pod-configmaps-8cac65e1-715b-44f7-a5d8-acd46db62be7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007704568s
STEP: Saw pod success
Jan 3 14:40:12.626: INFO: Pod "pod-configmaps-8cac65e1-715b-44f7-a5d8-acd46db62be7" satisfied condition "Succeeded or Failed"
Jan 3 14:40:12.629: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 pod pod-configmaps-8cac65e1-715b-44f7-a5d8-acd46db62be7 container agnhost-container: <nil>
STEP: delete the pod
Jan 3 14:40:12.657: INFO: Waiting for pod pod-configmaps-8cac65e1-715b-44f7-a5d8-acd46db62be7 to disappear
Jan 3 14:40:12.661: INFO: Pod pod-configmaps-8cac65e1-715b-44f7-a5d8-acd46db62be7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:40:12.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-753" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":57,"failed":0}
------------------------------
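The consumption pattern checked above, sketched by hand (all names invented): a non-root pod reading a key projected through a configMap volume.

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # non-root, as in the spec name
  containers:
  - name: test
    image: busybox:1.28
    command: ["cat", "/etc/config/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: demo-config
EOF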
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:40:12.690: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-c026910f-10a5-4d2e-87c2-94b03a6ca198
STEP: Creating a pod to test consume configMaps
Jan 3 14:40:12.738: INFO: Waiting up to 5m0s for pod "pod-configmaps-8ba90f9a-d6cc-42b2-b397-604eadb44aac" in namespace "configmap-135" to be "Succeeded or Failed"
Jan 3 14:40:12.742: INFO: Pod "pod-configmaps-8ba90f9a-d6cc-42b2-b397-604eadb44aac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.656342ms
Jan 3 14:40:14.746: INFO: Pod "pod-configmaps-8ba90f9a-d6cc-42b2-b397-604eadb44aac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007792735s
STEP: Saw pod success
Jan 3 14:40:14.746: INFO: Pod "pod-configmaps-8ba90f9a-d6cc-42b2-b397-604eadb44aac" satisfied condition "Succeeded or Failed"
Jan 3 14:40:14.749: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 pod pod-configmaps-8ba90f9a-d6cc-42b2-b397-604eadb44aac container agnhost-container: <nil>
STEP: delete the pod
Jan 3 14:40:14.763: INFO: Waiting for pod pod-configmaps-8ba90f9a-d6cc-42b2-b397-604eadb44aac to disappear
Jan 3 14:40:14.766: INFO: Pod pod-configmaps-8ba90f9a-d6cc-42b2-b397-604eadb44aac no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:40:14.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-135" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":67,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:39:59.704: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:40:16.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1871" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":3,"skipped":35,"failed":0}
------------------------------
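The quota lifecycle above, by hand (names invented): usage in status.used rises when a matching object is created and is released when it is deleted.

kubectl create quota demo-quota --hard=secrets=5
kubectl create secret generic demo-secret --from-literal=key=value
kubectl get quota demo-quota -o jsonpath='{.status.used.secrets}'   # now counts demo-secret
kubectl delete secret demo-secret                                   # usage drops back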
[Conformance]","total":-1,"completed":3,"skipped":35,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 3 14:40:16.837: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 3 14:40:17.405: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 3 14:40:20.430: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Registering the mutating pod webhook via the AdmissionRegistration API �[1mSTEP�[0m: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 14:40:20.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-8219" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-8219-markers" for this suite. 
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:40:20.669: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Jan 3 14:40:20.724: INFO: Waiting up to 5m0s for pod "downward-api-8699f233-eb37-48b5-b3cd-879ce7a2aeae" in namespace "downward-api-7717" to be "Succeeded or Failed"
Jan 3 14:40:20.728: INFO: Pod "downward-api-8699f233-eb37-48b5-b3cd-879ce7a2aeae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21568ms
Jan 3 14:40:22.732: INFO: Pod "downward-api-8699f233-eb37-48b5-b3cd-879ce7a2aeae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008331073s
Jan 3 14:40:24.736: INFO: Pod "downward-api-8699f233-eb37-48b5-b3cd-879ce7a2aeae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012294972s
STEP: Saw pod success
Jan 3 14:40:24.736: INFO: Pod "downward-api-8699f233-eb37-48b5-b3cd-879ce7a2aeae" satisfied condition "Succeeded or Failed"
Jan 3 14:40:24.740: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 pod downward-api-8699f233-eb37-48b5-b3cd-879ce7a2aeae container dapi-container: <nil>
STEP: delete the pod
Jan 3 14:40:24.767: INFO: Waiting for pod downward-api-8699f233-eb37-48b5-b3cd-879ce7a2aeae to disappear
Jan 3 14:40:24.771: INFO: Pod downward-api-8699f233-eb37-48b5-b3cd-879ce7a2aeae no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:40:24.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7717" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":63,"failed":0}
------------------------------
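The env wiring verified above, with pod and image names invented: the downward API exposes the pod's own UID as an environment variable.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
kubectl logs downward-uid-demo   # prints the pod's own UID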
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:40:24.826: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 3 14:40:28.913: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:40:28.913: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:40:28.995: INFO: Exec stderr: ""
Jan 3 14:40:28.995: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:40:28.995: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:40:29.094: INFO: Exec stderr: ""
Jan 3 14:40:29.094: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:40:29.094: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:40:29.201: INFO: Exec stderr: ""
Jan 3 14:40:29.201: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:40:29.201: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:40:29.287: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 3 14:40:29.287: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:40:29.287: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:40:29.382: INFO: Exec stderr: ""
Jan 3 14:40:29.382: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:40:29.382: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:40:29.472: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 3 14:40:29.472: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:40:29.472: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:40:29.574: INFO: Exec stderr: ""
Jan 3 14:40:29.574: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:40:29.574: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:40:29.667: INFO: Exec stderr: ""
Jan 3 14:40:29.667: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:40:29.667: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:40:29.776: INFO: Exec stderr: ""
Jan 3 14:40:29.776: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8832 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:40:29.776: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:40:29.882: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:40:29.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8832" for this suite.
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":87,"failed":0}
------------------------------
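What the exec checks above amount to (pod and container names are the ones from this run): without hostNetwork the kubelet writes /etc/hosts, and the managed file is recognizable by its header; with hostNetwork: true the pod sees the node's own file.

kubectl exec -n e2e-kubelet-etc-hosts-8832 test-pod -c busybox-1 -- cat /etc/hosts
# a kubelet-managed file begins with "# Kubernetes-managed hosts file."
kubectl exec -n e2e-kubelet-etc-hosts-8832 test-host-network-pod -c busybox-1 -- cat /etc/hosts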
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:40:29.919: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should provide secure master service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:40:29.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2270" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":7,"skipped":98,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:40:30.079: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-999e6576-17db-4cd3-ab0f-2d659c2562f7
STEP: Creating a pod to test consume secrets
Jan 3 14:40:30.134: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d8aea8fe-0c9a-4202-b2ac-0d2deaaf1f40" in namespace "projected-4705" to be "Succeeded or Failed"
Jan 3 14:40:30.138: INFO: Pod "pod-projected-secrets-d8aea8fe-0c9a-4202-b2ac-0d2deaaf1f40": Phase="Pending", Reason="", readiness=false. Elapsed: 3.850303ms
Jan 3 14:40:32.142: INFO: Pod "pod-projected-secrets-d8aea8fe-0c9a-4202-b2ac-0d2deaaf1f40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007785163s
STEP: Saw pod success
Jan 3 14:40:32.142: INFO: Pod "pod-projected-secrets-d8aea8fe-0c9a-4202-b2ac-0d2deaaf1f40" satisfied condition "Succeeded or Failed"
Jan 3 14:40:32.145: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-worker-erlai2 pod pod-projected-secrets-d8aea8fe-0c9a-4202-b2ac-0d2deaaf1f40 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 3 14:40:32.166: INFO: Waiting for pod pod-projected-secrets-d8aea8fe-0c9a-4202-b2ac-0d2deaaf1f40 to disappear
Jan 3 14:40:32.168: INFO: Pod pod-projected-secrets-d8aea8fe-0c9a-4202-b2ac-0d2deaaf1f40 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:40:32.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4705" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":144,"failed":0}
------------------------------
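A sketch of the projected-secret variant above (names invented): a non-root user plus fsGroup, with defaultMode controlling the mode bits of the projected file.

kubectl create secret generic demo-projected-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: test
    image: busybox:1.28
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      defaultMode: 0440    # expect -r--r----- on the projected file
      sources:
      - secret:
          name: demo-projected-secret
EOF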
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:40:32.180: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 3 14:40:32.576: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 3 14:40:35.600: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a mutating webhook configuration
Jan 3 14:40:45.636: INFO: Waiting for webhook configuration to be ready...
Jan 3 14:40:55.755: INFO: Waiting for webhook configuration to be ready...
Jan 3 14:41:05.866: INFO: Waiting for webhook configuration to be ready...
Jan 3 14:41:15.959: INFO: Waiting for webhook configuration to be ready...
Jan 3 14:41:25.982: INFO: Waiting for webhook configuration to be ready...
Jan 3 14:41:25.983: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0001fa200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.glob..func23.17()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:526 +0x407
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00270de00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00270de00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00270de00, 0x4fc9940)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:41:25.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1313" for this suite.
STEP: Destroying namespace "webhook-1313-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• Failure [53.945 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

  Jan 3 14:41:25.983: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0001fa200>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:526
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":8,"skipped":144,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:41:26.136: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 3 14:41:28.098: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 3 14:41:30.121: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353688, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353688, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353688, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353688, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 3 14:41:33.151: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:41:33.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7981" for this suite.
STEP: Destroying namespace "webhook-7981-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":9,"skipped":144,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
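The update/patch steps above, approximated with kubectl against an invented configuration name: flip the first rule's operations away from CREATE, then patch CREATE back in.

kubectl patch mutatingwebhookconfiguration demo-mutating-webhook --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
kubectl patch mutatingwebhookconfiguration demo-mutating-webhook --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]'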
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:41:33.459: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 3 14:41:34.446: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 3 14:41:37.480: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:41:37.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4721" for this suite.
STEP: Destroying namespace "webhook-4721-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":10,"skipped":154,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:40:14.803: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-8810
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a new StatefulSet
Jan 3 14:40:14.856: INFO: Found 0 stateful pods, waiting for 3
Jan 3 14:40:24.862: INFO: Found 2 stateful pods, waiting for 3
Jan 3 14:40:34.862: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 3 14:40:34.862: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 3 14:40:34.862: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 3 14:40:34.893: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 3 14:40:44.951: INFO: Updating stateful set ss2
Jan 3 14:40:44.965: INFO: Waiting for Pod statefulset-8810/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 3 14:40:54.976: INFO: Waiting for Pod statefulset-8810/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jan 3 14:41:05.057: INFO: Found 2 stateful pods, waiting for 3
Jan 3 14:41:15.063: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 3 14:41:15.064: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 3 14:41:15.064: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 3 14:41:15.099: INFO: Updating stateful set ss2
Jan 3 14:41:15.110: INFO: Waiting for Pod statefulset-8810/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 3 14:41:25.148: INFO: Updating stateful set ss2
Jan 3 14:41:25.167: INFO: Waiting for StatefulSet statefulset-8810/ss2 to complete update
Jan 3 14:41:25.168: INFO: Waiting for Pod statefulset-8810/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 3 14:41:35.191: INFO: Waiting for StatefulSet statefulset-8810/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Jan 3 14:41:45.181: INFO: Deleting all statefulset in ns statefulset-8810
Jan 3 14:41:45.187: INFO: Scaling statefulset ss2 to 0
Jan 3 14:42:15.215: INFO: Waiting for statefulset status.replicas updated to 0
Jan 3 14:42:15.220: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:42:15.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8810" for this suite.
• [SLOW TEST:120.458 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":6,"skipped":88,"failed":0}
------------------------------
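The canary and phased mechanics above, by hand: a partitioned RollingUpdate only replaces pods with ordinal >= partition, so lowering the partition rolls the change out in phases. The statefulset name and namespace are from this run; the container name is invented.

kubectl patch statefulset ss2 -n statefulset-8810 -p \
  '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
kubectl set image statefulset/ss2 -n statefulset-8810 webserver=docker.io/library/httpd:2.4.39-alpine
# only ss2-2 is replaced; drop the partition to 0 to finish the rollout:
kubectl patch statefulset ss2 -n statefulset-8810 -p \
  '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'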
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:42:15.267: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-28614aeb-4932-4a2c-8f65-fdbd1c39aa3f
STEP: Creating a pod to test consume secrets
Jan 3 14:42:15.343: INFO: Waiting up to 5m0s for pod "pod-secrets-a4f7151c-0b82-4bee-920f-4d9119949700" in namespace "secrets-7092" to be "Succeeded or Failed"
Jan 3 14:42:15.348: INFO: Pod "pod-secrets-a4f7151c-0b82-4bee-920f-4d9119949700": Phase="Pending", Reason="", readiness=false. Elapsed: 4.612028ms
Jan 3 14:42:17.354: INFO: Pod "pod-secrets-a4f7151c-0b82-4bee-920f-4d9119949700": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010686493s
STEP: Saw pod success
Jan 3 14:42:17.354: INFO: Pod "pod-secrets-a4f7151c-0b82-4bee-920f-4d9119949700" satisfied condition "Succeeded or Failed"
Jan 3 14:42:17.359: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-worker-erlai2 pod pod-secrets-a4f7151c-0b82-4bee-920f-4d9119949700 container secret-env-test: <nil>
STEP: delete the pod
Jan 3 14:42:17.408: INFO: Waiting for pod pod-secrets-a4f7151c-0b82-4bee-920f-4d9119949700 to disappear
Jan 3 14:42:17.418: INFO: Pod pod-secrets-a4f7151c-0b82-4bee-920f-4d9119949700 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:42:17.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7092" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":89,"failed":0}
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:39:33.966: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
Jan 3 14:39:34.007: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod with failed condition
STEP: updating the pod
Jan 3 14:41:34.606: INFO: Successfully updated pod "var-expansion-9747b7e7-4ea1-4f7f-9802-901c3d0ba1e1"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Jan 3 14:41:36.664: INFO: Deleting pod "var-expansion-9747b7e7-4ea1-4f7f-9802-901c3d0ba1e1" in namespace "var-expansion-9099"
Jan 3 14:41:36.677: INFO: Wait up to 5m0s for pod "var-expansion-9747b7e7-4ea1-4f7f-9802-901c3d0ba1e1" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:42:18.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9099" for this suite.
• [SLOW TEST:164.747 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":-1,"completed":1,"skipped":40,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:42:18.800: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 3 14:42:18.907: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3757 c99e4c3d-9c4c-4ff9-875e-464c17bd5b0e 4309 0 2023-01-03 14:42:18 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-03 14:42:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 3 14:42:18.908: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3757 c99e4c3d-9c4c-4ff9-875e-464c17bd5b0e 4311 0 2023-01-03 14:42:18 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-03 14:42:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:42:18.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3757" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":2,"skipped":72,"failed":0}
------------------------------
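The semantics checked above: a watch opened at an older resourceVersion replays every change that happened after it. kubectl cannot pin the starting resourceVersion of a watch, but the raw API call can; the namespace and resourceVersion here are the ones from this run.

kubectl get --raw \
  "/api/v1/namespaces/watch-3757/configmaps?watch=true&resourceVersion=4309"
# streams the MODIFIED (mutation: 2) and DELETED events recorded above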
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:42:18.943: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-da3c4aeb-dac3-44a5-b2b8-327d91e42fca
STEP: Creating a pod to test consume secrets
Jan 3 14:42:19.015: INFO: Waiting up to 5m0s for pod "pod-secrets-b565f8de-5996-4424-be0c-c225277fa261" in namespace "secrets-4007" to be "Succeeded or Failed"
Jan 3 14:42:19.023: INFO: Pod "pod-secrets-b565f8de-5996-4424-be0c-c225277fa261": Phase="Pending", Reason="", readiness=false. Elapsed: 7.766821ms
Jan 3 14:42:21.030: INFO: Pod "pod-secrets-b565f8de-5996-4424-be0c-c225277fa261": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014533682s
STEP: Saw pod success
Jan 3 14:42:21.030: INFO: Pod "pod-secrets-b565f8de-5996-4424-be0c-c225277fa261" satisfied condition "Succeeded or Failed"
Jan 3 14:42:21.037: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 pod pod-secrets-b565f8de-5996-4424-be0c-c225277fa261 container secret-volume-test: <nil>
STEP: delete the pod
Jan 3 14:42:21.078: INFO: Waiting for pod pod-secrets-b565f8de-5996-4424-be0c-c225277fa261 to disappear
Jan 3 14:42:21.088: INFO: Pod pod-secrets-b565f8de-5996-4424-be0c-c225277fa261 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:42:21.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4007" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":80,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:42:17.480: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:42:28.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2049" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":8,"skipped":101,"failed":0}
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:42:28.683: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Jan 3 14:42:28.748: INFO: Waiting up to 5m0s for pod "downward-api-a98b4798-4802-471b-a8e5-5e6b870fde4f" in namespace "downward-api-3789" to be "Succeeded or Failed"
Jan 3 14:42:28.751: INFO: Pod "downward-api-a98b4798-4802-471b-a8e5-5e6b870fde4f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.377482ms
Jan 3 14:42:30.757: INFO: Pod "downward-api-a98b4798-4802-471b-a8e5-5e6b870fde4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009716048s
STEP: Saw pod success
Jan 3 14:42:30.758: INFO: Pod "downward-api-a98b4798-4802-471b-a8e5-5e6b870fde4f" satisfied condition "Succeeded or Failed"
Jan 3 14:42:30.763: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-worker-erlai2 pod downward-api-a98b4798-4802-471b-a8e5-5e6b870fde4f container dapi-container: <nil>
STEP: delete the pod
Jan 3 14:42:30.793: INFO: Waiting for pod downward-api-a98b4798-4802-471b-a8e5-5e6b870fde4f to disappear
Jan 3 14:42:30.796: INFO: Pod downward-api-a98b4798-4802-471b-a8e5-5e6b870fde4f no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:42:30.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3789" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":122,"failed":0}
------------------------------
[BeforeEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:42:28.683: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Jan 3 14:42:28.748: INFO: Waiting up to 5m0s for pod "downward-api-a98b4798-4802-471b-a8e5-5e6b870fde4f" in namespace "downward-api-3789" to be "Succeeded or Failed"
Jan 3 14:42:28.751: INFO: Pod "downward-api-a98b4798-4802-471b-a8e5-5e6b870fde4f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.377482ms
Jan 3 14:42:30.757: INFO: Pod "downward-api-a98b4798-4802-471b-a8e5-5e6b870fde4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009716048s
STEP: Saw pod success
Jan 3 14:42:30.758: INFO: Pod "downward-api-a98b4798-4802-471b-a8e5-5e6b870fde4f" satisfied condition "Succeeded or Failed"
Jan 3 14:42:30.763: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-worker-erlai2 pod downward-api-a98b4798-4802-471b-a8e5-5e6b870fde4f container dapi-container: <nil>
STEP: delete the pod
Jan 3 14:42:30.793: INFO: Waiting for pod downward-api-a98b4798-4802-471b-a8e5-5e6b870fde4f to disappear
Jan 3 14:42:30.796: INFO: Pod downward-api-a98b4798-4802-471b-a8e5-5e6b870fde4f no longer exists
[AfterEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:42:30.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3789" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":122,"failed":0}
S
------------------------------
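"Provide host IP as an env var" amounts to a pod whose environment is populated from status.hostIP via the downward API; a minimal sketch (pod and container names and the image are illustrative, the fieldPath is the standard one):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api              # the run appends a random suffix
spec:
  containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP   # the node IP, injected by the kubelet
  restartPolicy: Never
```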
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:42:30.818: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 3 14:42:31.924: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 3 14:42:34.966: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:42:35.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1697" for this suite.
STEP: Destroying namespace "webhook-1697-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":10,"skipped":123,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
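The property under test: the API server does not send admission requests for ValidatingWebhookConfiguration or MutatingWebhookConfiguration objects to webhooks, so a webhook cannot mutate or block changes to webhook configurations, including its own. An illustrative sketch of the kind of registration the spec creates; the configuration name, service path, and CA bundle are placeholders, not the test's own values:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-webhook-configuration-deletions    # hypothetical name
webhooks:
  - name: deny-webhook-configuration-deletions.example.com
    rules:
      - apiGroups: ["admissionregistration.k8s.io"]
        apiVersions: ["*"]
        operations: ["DELETE"]
        resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
    clientConfig:
      service:
        namespace: webhook-1697                 # matches the namespace created above
        name: e2e-test-webhook
        path: /always-deny                      # hypothetical endpoint
      caBundle: "<base64-encoded CA, elided>"
    sideEffects: None
    admissionReviewVersions: ["v1"]
    failurePolicy: Fail
```

Despite the DELETE rule, the dummy configurations remain deletable, which is exactly what the two "should be possible to remove" steps verify.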
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:42:35.290: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 3 14:42:35.358: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41789471-cc4e-4999-8b76-46a1922178fa" in namespace "projected-9896" to be "Succeeded or Failed"
Jan 3 14:42:35.364: INFO: Pod "downwardapi-volume-41789471-cc4e-4999-8b76-46a1922178fa": Phase="Pending", Reason="", readiness=false. Elapsed: 5.152564ms
Jan 3 14:42:37.370: INFO: Pod "downwardapi-volume-41789471-cc4e-4999-8b76-46a1922178fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011890805s
STEP: Saw pod success
Jan 3 14:42:37.370: INFO: Pod "downwardapi-volume-41789471-cc4e-4999-8b76-46a1922178fa" satisfied condition "Succeeded or Failed"
Jan 3 14:42:37.376: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-worker-erlai2 pod downwardapi-volume-41789471-cc4e-4999-8b76-46a1922178fa container client-container: <nil>
STEP: delete the pod
Jan 3 14:42:37.410: INFO: Waiting for pod downwardapi-volume-41789471-cc4e-4999-8b76-46a1922178fa to disappear
Jan 3 14:42:37.414: INFO: Pod downwardapi-volume-41789471-cc4e-4999-8b76-46a1922178fa no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:42:37.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9896" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":145,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
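Here the pod reads its own container's CPU limit back from a file projected by the downward API volume source; a minimal sketch (names, image, and the limit value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume        # the run appends a random suffix
spec:
  containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: "1"
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      projected:
        sources:
          - downwardAPI:
              items:
                - path: cpu_limit
                  resourceFieldRef:
                    containerName: client-container
                    resource: limits.cpu
                    divisor: 1m   # report the limit in millicores (here: 1000)
  restartPolicy: Never
```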
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:41:37.779: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:41:37.868: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Creating first CR
Jan 3 14:41:38.763: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-03T14:41:38Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-03T14:41:38Z]] name:name1 resourceVersion:4125 uid:d92a24e8-eb33-4427-837f-5fb54470960e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jan 3 14:41:48.774: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-03T14:41:48Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-03T14:41:48Z]] name:name2 resourceVersion:4188 uid:951b0ecc-4c08-4fb6-b8bf-ea9e66d87e69] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jan 3 14:41:58.785: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-03T14:41:38Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-03T14:41:58Z]] name:name1 resourceVersion:4215 uid:d92a24e8-eb33-4427-837f-5fb54470960e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jan 3 14:42:08.796: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-03T14:41:48Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-03T14:42:08Z]] name:name2 resourceVersion:4235 uid:951b0ecc-4c08-4fb6-b8bf-ea9e66d87e69] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jan 3 14:42:18.812: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-03T14:41:38Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-03T14:41:58Z]] name:name1 resourceVersion:4301 uid:d92a24e8-eb33-4427-837f-5fb54470960e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jan 3 14:42:28.824: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-03T14:41:48Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-03T14:42:08Z]] name:name2 resourceVersion:4540 uid:951b0ecc-4c08-4fb6-b8bf-ea9e66d87e69] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:42:39.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-7555" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":11,"skipped":156,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
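The ADDED/MODIFIED/DELETED events above come from a watch on custom resources of a CRD the spec registers first. A sketch of a definition consistent with the log; only the group mygroup.example.com, version v1beta1, and kind WishIHadChosenNoxu appear in the events, so the plural/singular names below are guesses:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com   # must be <plural>.<group>
spec:
  group: mygroup.example.com
  scope: Namespaced
  names:
    plural: noxus                   # hypothetical
    singular: noxu                  # hypothetical
    kind: WishIHadChosenNoxu
  versions:
    - name: v1beta1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true   # free-form content/num fields
```

With that in place, creating, patching, and deleting name1/name2 yields exactly the sequence logged: two ADDED, two MODIFIED (note generation bumps to 2 and dummy:test appears), then two DELETED.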
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:42:39.367: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85
[It] deployment should support proportional scaling [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:42:39.418: INFO: Creating deployment "webserver-deployment"
Jan 3 14:42:39.427: INFO: Waiting for observed generation 1
Jan 3 14:42:41.460: INFO: Waiting for all required pods to come up
Jan 3 14:42:41.509: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 3 14:42:49.559: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 3 14:42:49.571: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 3 14:42:49.586: INFO: Updating deployment webserver-deployment
Jan 3 14:42:49.586: INFO: Waiting for observed generation 2
Jan 3 14:42:51.599: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 3 14:42:51.604: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 3 14:42:51.610: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 3 14:42:51.623: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 3 14:42:51.623: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 3 14:42:51.627: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 3 14:42:51.640: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 3 14:42:51.640: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 3 14:42:51.657: INFO: Updating deployment webserver-deployment
Jan 3 14:42:51.657: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 3 14:42:51.667: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 3 14:42:51.673: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79
Jan 3 14:42:51.697: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9072 1aca5947-3c91-4d7b-b550-ecd09d95e16d 4946 3 2023-01-03 14:42:39 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-03 14:42:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-03 14:42:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000834aa8 <nil> ClusterFirst map[] <nil> false false false <nil>
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2023-01-03 14:42:49 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-03 14:42:51 +0000 UTC,LastTransitionTime:2023-01-03 14:42:51 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jan 3 14:42:51.754: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-9072 0d0cb51e-fb4d-47aa-8533-7a191a825619 4940 3 2023-01-03 14:42:49 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 1aca5947-3c91-4d7b-b550-ecd09d95e16d 0xc000834e67 0xc000834e68}] [] [{kube-controller-manager Update apps/v1 2023-01-03 14:42:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1aca5947-3c91-4d7b-b550-ecd09d95e16d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000834ee8 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 3 14:42:51.754: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 3 14:42:51.754: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-9072 11c05c63-a3b1-4105-b9ea-283f4235be78 4937 3 2023-01-03 14:42:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 1aca5947-3c91-4d7b-b550-ecd09d95e16d 0xc000834f57 0xc000834f58}] [] [{kube-controller-manager Update apps/v1 2023-01-03 14:42:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1aca5947-3c91-4d7b-b550-ecd09d95e16d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000834fc8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 3 14:42:51.772: INFO: Pod "webserver-deployment-795d758f88-4gpv6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4gpv6 webserver-deployment-795d758f88- deployment-9072 959b2dd5-e954-4f6c-8e36-82464ffce5ed 4950 0 
2023-01-03 14:42:51 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0d0cb51e-fb4d-47aa-8533-7a191a825619 0xc001467100 0xc001467101}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d0cb51e-fb4d-47aa-8533-7a191a825619\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServ
iceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.773: INFO: Pod "webserver-deployment-795d758f88-85b77" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-85b77 webserver-deployment-795d758f88- deployment-9072 74bea479-4300-4cb8-9612-3b04da0e890a 4930 0 2023-01-03 14:42:49 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0d0cb51e-fb4d-47aa-8533-7a191a825619 0xc001467240 0xc001467241}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d0cb51e-fb4d-47aa-8533-7a191a825619\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:42:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.27,StartTime:2023-01-03 14:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.773: INFO: Pod "webserver-deployment-795d758f88-dkk8s" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-dkk8s webserver-deployment-795d758f88- deployment-9072 4097cba4-3ea2-418b-9729-1a85022f7d56 4933 0 2023-01-03 14:42:49 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0d0cb51e-fb4d-47aa-8533-7a191a825619 0xc001467400 0xc001467401}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d0cb51e-fb4d-47aa-8533-7a191a825619\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:42:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.21\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-worker-erlai2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 
14:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.21,StartTime:2023-01-03 14:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.21,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.774: INFO: Pod "webserver-deployment-795d758f88-h9knc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-h9knc webserver-deployment-795d758f88- deployment-9072 173d3a0d-1b25-4a77-b155-1f0eed185aa8 4917 0 2023-01-03 14:42:49 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0d0cb51e-fb4d-47aa-8533-7a191a825619 0xc0014675e0 0xc0014675e1}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d0cb51e-fb4d-47aa-8533-7a191a825619\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:42:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.17\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.17,StartTime:2023-01-03 14:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.17,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.774: INFO: Pod "webserver-deployment-795d758f88-qkrd6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qkrd6 webserver-deployment-795d758f88- deployment-9072 dbbd5f3b-4e6c-41c9-b61f-7621d8b01e4f 4923 0 2023-01-03 14:42:49 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0d0cb51e-fb4d-47aa-8533-7a191a825619 0xc0014677a0 0xc0014677a1}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d0cb51e-fb4d-47aa-8533-7a191a825619\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:42:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.14\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-worker-u044o2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 
14:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.14,StartTime:2023-01-03 14:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.775: INFO: Pod "webserver-deployment-795d758f88-vxffb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-vxffb webserver-deployment-795d758f88- deployment-9072 46342361-d5f9-4050-8655-e1e2cc8738a2 4958 0 2023-01-03 14:42:51 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0d0cb51e-fb4d-47aa-8533-7a191a825619 0xc001467980 0xc001467981}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d0cb51e-fb4d-47aa-8533-7a191a825619\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSCla
(tail of the preceding Pod dump, truncated: status Pending, QoS BestEffort, no conditions or IPs reported)
Jan 3 14:42:51.775: INFO: Pod "webserver-deployment-795d758f88-x99gm" is not available: Pending; created 2023-01-03 14:42:51 by ReplicaSet webserver-deployment-795d758f88; container httpd, image webserver:404; not yet scheduled (NodeName empty, no conditions, no IPs); QoS BestEffort.
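Triage note: pods like x99gm above are in the earliest waiting flavor: the Deployment controller has created them, but the scheduler has not yet bound them to a node (empty NodeName, empty conditions list). A minimal sketch of a check for that state, assuming client-go types; the package and helper name (podtriage.unscheduled) are hypothetical and not part of the e2e framework:

    package podtriage

    import corev1 "k8s.io/api/core/v1"

    // unscheduled reports whether the scheduler has not yet bound the pod
    // to a node, i.e. the state of x99gm above: Spec.NodeName is empty and
    // no PodScheduled=True condition has been recorded.
    func unscheduled(p *corev1.Pod) bool {
        if p.Spec.NodeName != "" {
            return false
        }
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodScheduled && c.Status == corev1.ConditionTrue {
                return false
            }
        }
        return true
    }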
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.20\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-worker-erlai2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 
14:42:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.20,StartTime:2023-01-03 14:42:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.777: INFO: Pod "webserver-deployment-dd94f59b7-24jpf" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-24jpf webserver-deployment-dd94f59b7- deployment-9072 1eb5030d-b9dc-4c38-b3c9-953a446bee31 4956 0 2023-01-03 14:42:51 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 11c05c63-a3b1-4105-b9ea-283f4235be78 0xc001467da0 0xc001467da1}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11c05c63-a3b1-4105-b9ea-283f4235be78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:42:51 +0000 UTC FieldsV1 
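Triage note: the ErrImagePull above appears deliberate: the upstream Deployment conformance tests roll the webserver Deployment to the intentionally unpullable image webserver:404, so every pod of the 795d758f88 ReplicaSet stays unready, while the dd94f59b7 ReplicaSet keeps the valid docker.io/library/httpd:2.4.38-alpine image. A sketch of how one might list waiting-container reasons during triage, assuming client-go with a kubeconfig at the default path; the namespace (deployment-9072) and label selector (name=httpd) are taken from the log above:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load ~/.kube/config; inside a pod one would use rest.InClusterConfig instead.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := client.CoreV1().Pods("deployment-9072").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "name=httpd"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            for _, s := range p.Status.ContainerStatuses {
                if w := s.State.Waiting; w != nil {
                    // e.g. "webserver-deployment-795d758f88-xhmcs/httpd: ErrImagePull: ..."
                    fmt.Printf("%s/%s: %s: %s\n", p.Name, s.Name, w.Reason, w.Message)
                }
            }
        }
    }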
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-worker-erlai2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2023-01-03 14:42:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.778: INFO: Pod "webserver-deployment-dd94f59b7-26g2g" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-26g2g webserver-deployment-dd94f59b7- deployment-9072 89ce7ba4-9263-4ec6-a9d0-b821de4e37f3 4788 0 2023-01-03 14:42:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 11c05c63-a3b1-4105-b9ea-283f4235be78 0xc001467f10 0xc001467f11}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11c05c63-a3b1-4105-b9ea-283f4235be78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:42:41 +0000 UTC FieldsV1 
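Triage note: unlike ErrImagePull, the ContainerCreating reason on 24jpf is transient; its image is valid and the pod normally becomes ready once the sandbox and image are set up. The log's "available" / "not available" labels track pod readiness. A rough equivalent check, assuming client-go types; the e2e framework additionally honors minReadySeconds, which this hypothetical helper ignores:

    package podtriage

    import corev1 "k8s.io/api/core/v1"

    // isAvailable approximates what the log above labels "available":
    // the pod is Running and its Ready condition is True.
    func isAvailable(p *corev1.Pod) bool {
        if p.Status.Phase != corev1.PodRunning {
            return false
        }
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }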
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.12\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-worker-u044o2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.12,StartTime:2023-01-03 14:42:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-03 14:42:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2eb226fa6e5fa125953c7b2437481b0030452167cd00f5217f47675a0fe40cf7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.779: INFO: Pod "webserver-deployment-dd94f59b7-4dk8l" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4dk8l webserver-deployment-dd94f59b7- deployment-9072 e32299e7-2d48-4f39-8ef3-2fb81d425f7e 4784 0 2023-01-03 14:42:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 11c05c63-a3b1-4105-b9ea-283f4235be78 0xc0038d40a0 0xc0038d40a1}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11c05c63-a3b1-4105-b9ea-283f4235be78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:42:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.15\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 
+0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.15,StartTime:2023-01-03 14:42:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-03 14:42:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cda04f60ae0f640e4ceeb3b68bc5d22f0ba4a73e00370d7131a4aeee55908200,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.779: INFO: Pod "webserver-deployment-dd94f59b7-5ps7v" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-5ps7v webserver-deployment-dd94f59b7- deployment-9072 aaadde73-d3cf-482a-b308-da8370d91c40 4954 0 2023-01-03 14:42:51 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 11c05c63-a3b1-4105-b9ea-283f4235be78 0xc0038d4230 0xc0038d4231}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:51 +0000 UTC FieldsV1 
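Triage note: the NodeName values show the test pods spread across both node flavors of this workload cluster: the MachinePool workers (k8s-upgrade-and-conformance-1wcp0z-worker-erlai2, -worker-u044o2) and the MachineDeployment nodes (k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh, -t4mw6). A small grouping helper makes that spread visible at a glance; hypothetical sketch, assuming client-go types:

    package podtriage

    import corev1 "k8s.io/api/core/v1"

    // podsByNode groups pod names by the node they were scheduled to.
    func podsByNode(pods []corev1.Pod) map[string][]string {
        byNode := map[string][]string{}
        for _, p := range pods {
            node := p.Spec.NodeName
            if node == "" {
                node = "<unscheduled>"
            }
            byNode[node] = append(byNode[node], p.Name)
        }
        return byNode
    }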
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11c05c63-a3b1-4105-b9ea-283f4235be78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]
ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.780: INFO: Pod "webserver-deployment-dd94f59b7-95wvr" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-95wvr webserver-deployment-dd94f59b7- deployment-9072 5982c064-5c3c-42e1-b2e8-e4fe7b8e02f2 4800 0 2023-01-03 14:42:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 11c05c63-a3b1-4105-b9ea-283f4235be78 0xc0038d4337 0xc0038d4338}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11c05c63-a3b1-4105-b9ea-283f4235be78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:42:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.24\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:
map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.24,StartTime:2023-01-03 14:42:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-03 14:42:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://db0c6b6933da9da6e7c2b62236f1a72926d6dd3c4d7baa2dd74aee0bedfcfb82,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.24,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.781: INFO: Pod "webserver-deployment-dd94f59b7-9f5wb" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-9f5wb webserver-deployment-dd94f59b7- deployment-9072 9afb2a24-a0d2-424f-a0f5-40e45c98335e 4791 0 2023-01-03 14:42:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 11c05c63-a3b1-4105-b9ea-283f4235be78 0xc0038d44d0 0xc0038d44d1}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:39 +0000 UTC FieldsV1 
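Triage note: every pod carries a pod-template-hash label identifying its ReplicaSet (795d758f88 for the broken webserver:404 template, dd94f59b7 for the working httpd:2.4.38-alpine one), so tallying that label reproduces the per-ReplicaSet replica counts the test asserts on. Hypothetical sketch, assuming client-go types:

    package podtriage

    import corev1 "k8s.io/api/core/v1"

    // countByTemplateHash tallies pods per ReplicaSet via the
    // pod-template-hash label set by the Deployment controller.
    func countByTemplateHash(pods []corev1.Pod) map[string]int {
        counts := map[string]int{}
        for _, p := range pods {
            counts[p.Labels["pod-template-hash"]]++
        }
        return counts
    }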
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11c05c63-a3b1-4105-b9ea-283f4235be78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:42:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.13\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-worker-u044o2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,To
lerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.13,StartTime:2023-01-03 14:42:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-03 14:42:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2ff97417c871b45f8d7026aa902022768df2ecc3bba425cb949139cca53247d8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.13,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.781: INFO: Pod "webserver-deployment-dd94f59b7-ftqqh" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ftqqh webserver-deployment-dd94f59b7- deployment-9072 4475231b-3a6b-4e48-b226-e4c273988247 4798 0 2023-01-03 14:42:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 11c05c63-a3b1-4105-b9ea-283f4235be78 0xc0038d4670 0xc0038d4671}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11c05c63-a3b1-4105-b9ea-283f4235be78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:42:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.25\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 
+0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.25,StartTime:2023-01-03 14:42:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-03 14:42:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://35f756e53be54456d2657c190f068f3ed68844a6fa6798915ae068d26771e796,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.783: INFO: Pod "webserver-deployment-dd94f59b7-jks96" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jks96 webserver-deployment-dd94f59b7- deployment-9072 19ba3f31-d6dc-48d0-96fc-763522a5e7a5 4782 0 2023-01-03 14:42:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 11c05c63-a3b1-4105-b9ea-283f4235be78 0xc0038d4800 0xc0038d4801}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11c05c63-a3b1-4105-b9ea-283f4235be78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:42:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.16\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 
+0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.16,StartTime:2023-01-03 14:42:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-03 14:42:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3ac255bbb186dfa4a9342ed6e086ba1a010275a31132bf7421789906cec8bfe8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.783: INFO: Pod "webserver-deployment-dd94f59b7-qlpnk" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qlpnk webserver-deployment-dd94f59b7- deployment-9072 6d5ae01a-f71c-4489-83cd-cc7d5ac3b16a 4957 0 2023-01-03 14:42:51 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 11c05c63-a3b1-4105-b9ea-283f4235be78 0xc0038d49a0 0xc0038d49a1}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11c05c63-a3b1-4105-b9ea-283f4235be78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]
ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.783: INFO: Pod "webserver-deployment-dd94f59b7-qx6qk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qx6qk webserver-deployment-dd94f59b7- deployment-9072 897581c8-16d5-4104-afff-f2d322c92fb0 4842 0 2023-01-03 14:42:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 11c05c63-a3b1-4105-b9ea-283f4235be78 0xc0038d4ab7 0xc0038d4ab8}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11c05c63-a3b1-4105-b9ea-283f4235be78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:42:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.19\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:
map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-worker-erlai2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.19,StartTime:2023-01-03 14:42:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-03 14:42:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1ba34bf47ca541f4a0634f02baaeb59c4c5f7e9764d902b2113da2842c70b7e1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.784: INFO: Pod "webserver-deployment-dd94f59b7-rtnk6" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rtnk6 webserver-deployment-dd94f59b7- deployment-9072 953e4c77-d715-44af-8556-6cef18f8d9cb 4949 0 2023-01-03 14:42:51 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 11c05c63-a3b1-4105-b9ea-283f4235be78 0xc0038d4c50 0xc0038d4c51}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11c05c63-a3b1-4105-b9ea-283f4235be78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-worker-u044o2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodS
cheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.784: INFO: Pod "webserver-deployment-dd94f59b7-twbwj" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-twbwj webserver-deployment-dd94f59b7- deployment-9072 0826a985-3828-4db2-bb9d-47a8e3935ca0 4795 0 2023-01-03 14:42:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 11c05c63-a3b1-4105-b9ea-283f4235be78 0xc0038d4d70 0xc0038d4d71}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11c05c63-a3b1-4105-b9ea-283f4235be78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:42:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.26\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSour
ce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.26,StartTime:2023-01-03 14:42:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-03 14:42:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://eb96a67ad540699525f16d1598f033b8ae4f2a1c25277ad855c69ecafe3026af,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.784: INFO: Pod "webserver-deployment-dd94f59b7-v74js" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-v74js webserver-deployment-dd94f59b7- deployment-9072 7d55dc5e-b1a8-4206-9e0c-629d6126109c 4953 0 2023-01-03 14:42:51 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 11c05c63-a3b1-4105-b9ea-283f4235be78 0xc0038d4f10 0xc0038d4f11}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11c05c63-a3b1-4105-b9ea-283f4235be78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]
ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.784: INFO: Pod "webserver-deployment-dd94f59b7-x5wrj" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-x5wrj webserver-deployment-dd94f59b7- deployment-9072 5b45c8b5-29ea-466e-8502-1049603f6061 4948 0 2023-01-03 14:42:51 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 11c05c63-a3b1-4105-b9ea-283f4235be78 0xc0038d5027 0xc0038d5028}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11c05c63-a3b1-4105-b9ea-283f4235be78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-worker-erlai2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tol
erations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:42:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 3 14:42:51.784: INFO: Pod "webserver-deployment-dd94f59b7-xdmkk" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xdmkk webserver-deployment-dd94f59b7- deployment-9072 38119aa7-6ce1-4b03-9808-7d6522d012fa 4952 0 2023-01-03 14:42:51 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 11c05c63-a3b1-4105-b9ea-283f4235be78 0xc0038d5140 0xc0038d5141}] [] [{kube-controller-manager Update v1 2023-01-03 14:42:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11c05c63-a3b1-4105-b9ea-283f4235be78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5knq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5knq9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5knq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,P
rocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:42:51.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9072" for this suite.
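The pod dumps above belong to the Deployment "proportional scaling" spec, whose result record follows just below: a Deployment is scaled up while a rollout is in flight, and the controller splits the extra replicas across the old and new ReplicaSets in proportion to their current sizes. A minimal client-go sketch of that scenario, not the conformance test's own code; the kubeconfig path is reused from this log, while the namespace "default" and the surge/unavailability values are illustrative assumptions:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	autoscalingv1 "k8s.io/api/autoscaling/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx := context.Background()

	replicas := int32(10)
	maxSurge, maxUnavailable := intstr.FromInt(3), intstr.FromInt(2)
	labels := map[string]string{"name": "httpd"}

	// A Deployment that rolls out with both surge and unavailability
	// headroom, shaped like the webserver-deployment in the dumps above.
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "httpd",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	_, err = cs.AppsV1().Deployments("default").Create(ctx, d, metav1.CreateOptions{})
	must(err)

	// After triggering a new rollout (e.g. an image change) and while
	// both old and new ReplicaSets still own pods, scale the Deployment;
	// the controller distributes the additional replicas across the
	// ReplicaSets proportionally to their sizes.
	scale := &autoscalingv1.Scale{
		ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment", Namespace: "default"},
		Spec:       autoscalingv1.ScaleSpec{Replicas: 30},
	}
	_, err = cs.AppsV1().Deployments("default").UpdateScale(ctx, "webserver-deployment", scale, metav1.UpdateOptions{})
	must(err)
}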
• ------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":12,"skipped":156,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:42:37.502: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
Jan 3 14:42:37.555: INFO: PodSpec: initContainers in spec.initContainers
Jan 3 14:43:30.773: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-2cbd1f07-c0da-461f-9084-b0fdf10d5e08", GenerateName:"", Namespace:"init-container-7367", SelfLink:"", UID:"ede6a362-1b9e-4149-90a8-797fdd6a1c43", ResourceVersion:"5315", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63808353757, loc:(*time.Location)(0x798e100)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"555656845"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002fba9e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002fbaa00)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002fbaa20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002fbaa40)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-f67qd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc003150900), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-f67qd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-f67qd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-f67qd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xc00300db98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001f32d90), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00300dc10)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00300dc30)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00300dc38), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00300dc3c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003445e90), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353757, loc:(*time.Location)(0x798e100)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353757, loc:(*time.Location)(0x798e100)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353757, loc:(*time.Location)(0x798e100)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353757, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"192.168.1.23", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.1.23"}}, StartTime:(*v1.Time)(0xc002fbaa60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001f32e70)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001f32ee0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://0b96d005f2562662256618d7168ee7e79c7006b429188bc3f46f1d95600b096e", Started:(*bool)(nil)}, 
v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002fbaaa0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002fbaa80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00300dcbf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:43:30.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7367" for this suite.
• ------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":12,"skipped":165,"failed":0}
------------------------------
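The init-container spec above rests on an ordering guarantee: under RestartPolicy Always, a failing init container is restarted with backoff indefinitely and the pod's app containers never start. A small sketch of the same pod shape, with the container names, images, and commands taken from the dump; the kubeconfig path is reused from this log, and the namespace "default" is an illustrative assumption:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail", Labels: map[string]string{"name": "foo"}},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				// init1 always exits non-zero, so it is restarted with
				// backoff forever; init2 and run1 must never start.
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
	must(err)
	// Expected steady state, as in the dump above: init1 Terminated with
	// a climbing RestartCount, init2 and run1 stuck Waiting
	// (PodInitializing), and the pod phase remaining Pending.
}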
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:43:30.879: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:43:30.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7518 version'
Jan 3 14:43:31.093: INFO: stderr: ""
Jan 3 14:43:31.093: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.15\", GitCommit:\"8f1e5bf0b9729a899b8df86249b56e2c74aebc55\", GitTreeState:\"clean\", BuildDate:\"2022-01-19T17:27:39Z\", GoVersion:\"go1.15.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.15\", GitCommit:\"8f1e5bf0b9729a899b8df86249b56e2c74aebc55\", GitTreeState:\"clean\", BuildDate:\"2022-10-26T15:31:34Z\", GoVersion:\"go1.15.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:43:31.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7518" for this suite.
• ------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":13,"skipped":202,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:43:31.119: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:43:33.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7305" for this suite.
• ------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":14,"skipped":205,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:42:51.935: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name s-test-opt-del-ff8dae56-5b18-45aa-9eb2-d016025e45c6
STEP: Creating secret with name s-test-opt-upd-a07d0b02-5bf7-4586-849c-e88919677eca
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-ff8dae56-5b18-45aa-9eb2-d016025e45c6
STEP: Updating secret s-test-opt-upd-a07d0b02-5bf7-4586-849c-e88919677eca
STEP: Creating secret with name s-test-opt-create-b20d6acf-39b4-4788-a8a9-fc54210e4c1e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:44:02.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1907" for this suite.
• ------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":175,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
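The Secrets spec above exercises the kubelet's volume refresh: secrets mounted as volumes, including optional ones, are re-projected into the running pod after the secret is deleted, updated, or created. A hedged sketch of the update case; object names, the namespace "default", and the probe command are illustrative, not taken from the test's source:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx := context.Background()
	optional := true

	sec, err := cs.CoreV1().Secrets("default").Create(ctx, &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "s-test-opt-upd"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}, metav1.CreateOptions{})
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
					SecretName: "s-test-opt-upd",
					// Optional lets the pod start (with an empty
					// directory) even while the secret is absent.
					Optional: &optional,
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "while true; do cat /etc/secret/data-1; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret"}},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	must(err)

	// The kubelet re-projects the volume periodically, so this update
	// eventually shows up in /etc/secret/data-1 inside the running pod.
	sec.Data["data-1"] = []byte("value-2")
	_, err = cs.CoreV1().Secrets("default").Update(ctx, sec, metav1.UpdateOptions{})
	must(err)
}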
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:44:02.756: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating the pod
Jan 3 14:44:05.382: INFO: Successfully updated pod "annotationupdate8849a816-9095-4e40-963a-bbd36a0c6dc5"
[AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:44:09.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5913" for this suite.
• ------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":198,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
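The projected downwardAPI spec checks that a pod observes changes to its own annotations through a projected downward-API volume without a restart. A sketch of that pattern, assuming the namespace "default" and illustrative object names:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx := context.Background()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"build": "one"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						DownwardAPI: &corev1.DownwardAPIProjection{
							Items: []corev1.DownwardAPIVolumeFile{{
								Path:     "annotations",
								FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
							}},
						},
					}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	must(err)

	// Patching the annotation is reflected in /etc/podinfo/annotations
	// inside the running container, with no restart.
	patch := []byte(`{"metadata":{"annotations":{"build":"two"}}}`)
	_, err = cs.CoreV1().Pods("default").Patch(ctx, "annotationupdate-demo", types.MergePatchType, patch, metav1.PatchOptions{})
	must(err)
}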
[BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:44:09.479: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:44:09.536: INFO: Creating ReplicaSet my-hostname-basic-e1bf7cf4-c180-4459-b691-4ca14851b8b5
Jan 3 14:44:09.549: INFO: Pod name my-hostname-basic-e1bf7cf4-c180-4459-b691-4ca14851b8b5: Found 0 pods out of 1
Jan 3 14:44:14.554: INFO: Pod name my-hostname-basic-e1bf7cf4-c180-4459-b691-4ca14851b8b5: Found 1 pods out of 1
Jan 3 14:44:14.555: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e1bf7cf4-c180-4459-b691-4ca14851b8b5" is running
Jan 3 14:44:14.559: INFO: Pod "my-hostname-basic-e1bf7cf4-c180-4459-b691-4ca14851b8b5-zt5tx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-03 14:44:09 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-03 14:44:10 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-03 14:44:10 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-03 14:44:09 +0000 UTC Reason: Message:}])
Jan 3 14:44:14.560: INFO: Trying to dial the pod
Jan 3 14:44:19.584: INFO: Controller my-hostname-basic-e1bf7cf4-c180-4459-b691-4ca14851b8b5: Got expected result from replica 1 [my-hostname-basic-e1bf7cf4-c180-4459-b691-4ca14851b8b5-zt5tx]: "my-hostname-basic-e1bf7cf4-c180-4459-b691-4ca14851b8b5-zt5tx", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:44:19.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8247" for this suite.
• ------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":15,"skipped":215,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
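The ReplicaSet spec creates replicas that each answer with their own pod name and then dials every replica, which is why the log prints the pod's name as the "expected result". A sketch of the controller object; the agnhost image tag and the namespace "default" are assumptions, not taken from this log:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name: "my-hostname-basic",
					// agnhost's serve-hostname mode answers HTTP on 9376
					// with its own pod name, so each replica can be
					// dialed and verified individually.
					Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
					Args:  []string{"serve-hostname"},
					Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
				}}},
			},
		},
	}
	_, err = cs.AppsV1().ReplicaSets("default").Create(context.Background(), rs, metav1.CreateOptions{})
	must(err)
}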
[BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:44:19.639: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test override command
Jan 3 14:44:19.709: INFO: Waiting up to 5m0s for pod "client-containers-f88bac31-0a44-4710-8546-5045fa7e8de9" in namespace "containers-7400" to be "Succeeded or Failed"
Jan 3 14:44:19.720: INFO: Pod "client-containers-f88bac31-0a44-4710-8546-5045fa7e8de9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.867727ms
Jan 3 14:44:21.730: INFO: Pod "client-containers-f88bac31-0a44-4710-8546-5045fa7e8de9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020575445s
STEP: Saw pod success
Jan 3 14:44:21.730: INFO: Pod "client-containers-f88bac31-0a44-4710-8546-5045fa7e8de9" satisfied condition "Succeeded or Failed"
Jan 3 14:44:21.735: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 pod client-containers-f88bac31-0a44-4710-8546-5045fa7e8de9 container agnhost-container: <nil>
STEP: delete the pod
Jan 3 14:44:21.771: INFO: Waiting for pod client-containers-f88bac31-0a44-4710-8546-5045fa7e8de9 to disappear
Jan 3 14:44:21.781: INFO: Pod client-containers-f88bac31-0a44-4710-8546-5045fa7e8de9 no longer exists
[AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:44:21.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7400" for this suite.
• ------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":229,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
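The Docker Containers spec verifies that a container's command field overrides the image's ENTRYPOINT (and args would override the image's CMD). A minimal sketch of the same idea using busybox rather than the agnhost container the test itself runs; names and namespace are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "demo",
				Image: "docker.io/library/busybox:1.29",
				// Command replaces the image's ENTRYPOINT; Args (unset
				// here) would replace the image's CMD.
				Command: []string{"echo", "command overrides the entrypoint"},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
	must(err)
	// As in the log above, such a pod runs to completion: its phase goes
	// Pending -> Succeeded, and the container log holds the echoed line.
}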
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:44:21.949: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: getting /apis
STEP: getting /apis/certificates.k8s.io
STEP: getting /apis/certificates.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jan 3 14:44:23.216: INFO: starting watch
STEP: patching
STEP: updating
Jan 3 14:44:23.237: INFO: waiting for watch events with expected annotations
Jan 3 14:44:23.237: INFO: saw patched and updated annotations
STEP: getting /approval
STEP: patching /approval
STEP: updating /approval
STEP: getting /status
STEP: patching /status
STEP: updating /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:44:23.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-8728" for this suite.
• ------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":17,"skipped":278,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
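The Certificates API spec walks a CertificateSigningRequest through create/get/list/watch/patch/update plus the /approval and /status subresources. A sketch of the create-and-approve slice of that surface; the CommonName, object name, and signer choice are illustrative:

package main

import (
	"context"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"

	certv1 "k8s.io/api/certificates/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Build a PEM-encoded PKCS#10 request to carry in spec.request.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	must(err)
	der, err := x509.CreateCertificateRequest(rand.Reader,
		&x509.CertificateRequest{Subject: pkix.Name{CommonName: "example-user"}}, key)
	must(err)
	csrPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})

	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx := context.Background()

	created, err := cs.CertificatesV1().CertificateSigningRequests().Create(ctx,
		&certv1.CertificateSigningRequest{
			ObjectMeta: metav1.ObjectMeta{Name: "example-csr"},
			Spec: certv1.CertificateSigningRequestSpec{
				Request:    csrPEM,
				SignerName: "kubernetes.io/kube-apiserver-client",
				Usages:     []certv1.KeyUsage{certv1.UsageClientAuth},
			},
		}, metav1.CreateOptions{})
	must(err)

	// Approval is a dedicated subresource, which is what the
	// "patching /approval" and "updating /approval" steps above touch.
	created.Status.Conditions = append(created.Status.Conditions,
		certv1.CertificateSigningRequestCondition{
			Type:    certv1.CertificateApproved,
			Status:  corev1.ConditionTrue,
			Reason:  "Sketch",
			Message: "approved for illustration",
		})
	_, err = cs.CertificatesV1().CertificateSigningRequests().
		UpdateApproval(ctx, created.Name, created, metav1.UpdateOptions{})
	must(err)
}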
•
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":17,"skipped":278,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:44:23.388: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:44:23.447: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 3 14:44:27.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6019 --namespace=crd-publish-openapi-6019 create -f -'
Jan 3 14:44:28.919: INFO: stderr: ""
Jan 3 14:44:28.919: INFO: stdout: "e2e-test-crd-publish-openapi-2006-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 3 14:44:28.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6019 --namespace=crd-publish-openapi-6019 delete e2e-test-crd-publish-openapi-2006-crds test-cr'
Jan 3 14:44:29.120: INFO: stderr: ""
Jan 3 14:44:29.121: INFO: stdout: "e2e-test-crd-publish-openapi-2006-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jan 3 14:44:29.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6019 --namespace=crd-publish-openapi-6019 apply -f -'
Jan 3 14:44:29.679: INFO: stderr: ""
Jan 3 14:44:29.679: INFO: stdout: "e2e-test-crd-publish-openapi-2006-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 3 14:44:29.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6019 --namespace=crd-publish-openapi-6019 delete e2e-test-crd-publish-openapi-2006-crds test-cr'
Jan 3 14:44:29.904: INFO: stderr: ""
Jan 3 14:44:29.904: INFO: stdout: "e2e-test-crd-publish-openapi-2006-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 3 14:44:29.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6019 explain e2e-test-crd-publish-openapi-2006-crds'
Jan 3 14:44:31.509: INFO: stderr: ""
Jan 3 14:44:31.509: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2006-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:44:34.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6019" for this suite.
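"Preserving unknown fields at the schema root" means the CRD's openAPIV3Schema sets x-kubernetes-preserve-unknown-fields, so pruning is disabled and kubectl accepts arbitrary properties, as the create/apply runs above show. A minimal sketch of such a CRD with the apiextensions v1 Go types; the group and resource names are illustrative:

package sketch

import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unknownFieldsCRD defines a CRD whose root schema preserves unknown
// fields, disabling pruning for the whole object.
func unknownFieldsCRD() *apiextv1.CustomResourceDefinition {
	preserve := true
	return &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "testcrs.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "testcrs", Singular: "testcr", Kind: "TestCR", ListKind: "TestCRList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type: "object",
						// Equivalent to x-kubernetes-preserve-unknown-fields: true.
						XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}
}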
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":18,"skipped":281,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:44:34.735: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-3c4e209e-bd41-4370-a8ac-0bc200e6fe09
STEP: Creating a pod to test consume configMaps
Jan 3 14:44:34.816: INFO: Waiting up to 5m0s for pod "pod-configmaps-e6d32a14-f139-4d41-96dc-63ce8423caf0" in namespace "configmap-5938" to be "Succeeded or Failed"
Jan 3 14:44:34.830: INFO: Pod "pod-configmaps-e6d32a14-f139-4d41-96dc-63ce8423caf0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.754041ms
Jan 3 14:44:36.837: INFO: Pod "pod-configmaps-e6d32a14-f139-4d41-96dc-63ce8423caf0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020654146s
Jan 3 14:44:38.844: INFO: Pod "pod-configmaps-e6d32a14-f139-4d41-96dc-63ce8423caf0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027845869s
STEP: Saw pod success
Jan 3 14:44:38.844: INFO: Pod "pod-configmaps-e6d32a14-f139-4d41-96dc-63ce8423caf0" satisfied condition "Succeeded or Failed"
Jan 3 14:44:38.850: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 pod pod-configmaps-e6d32a14-f139-4d41-96dc-63ce8423caf0 container agnhost-container: <nil>
STEP: delete the pod
Jan 3 14:44:38.883: INFO: Waiting for pod pod-configmaps-e6d32a14-f139-4d41-96dc-63ce8423caf0 to disappear
Jan 3 14:44:38.888: INFO: Pod pod-configmaps-e6d32a14-f139-4d41-96dc-63ce8423caf0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:44:38.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5938" for this suite.
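The "mappings" in this test are the items of the configMap volume source, which remap a ConfigMap key to a chosen file path (and mode) inside the mount. An illustrative sketch of that volume shape; the key, path, and mode values are assumptions, not read from the log:

package sketch

import corev1 "k8s.io/api/core/v1"

// configMapVolume projects one ConfigMap key to a remapped file path.
func configMapVolume() corev1.Volume {
	mode := int32(0644)
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				// Without Items every key becomes a file named after the key;
				// with Items only the listed keys appear, at the given paths.
				Items:       []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2", Mode: &mode}},
				DefaultMode: &mode,
			},
		},
	}
}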
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":284,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:39:34.014: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
Jan 3 14:39:34.094: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0103 14:39:40.154615 16 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jan 3 14:44:40.165: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:44:40.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5888" for this suite.
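The deleteOptions behavior this test relies on corresponds to foreground cascading deletion: the owner object keeps a deletion timestamp and a foregroundDeletion finalizer until the garbage collector has removed its dependents. A minimal sketch of such a delete call with client-go (clientset construction omitted; this is an illustration, not the test's own code):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCForeground deletes a ReplicationController with foreground
// propagation, so the RC stays visible until all its pods are gone.
func deleteRCForeground(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	foreground := metav1.DeletePropagationForeground
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &foreground,
	})
}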
• [SLOW TEST:306.181 seconds]
[sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":1,"skipped":49,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:44:40.253: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85
[It] deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:44:40.402: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 3 14:44:45.408: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 3 14:44:45.408: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79
Jan 3 14:44:45.447: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-8754 462b6917-e80a-4ae1-b9e1-0a5be8a2597b 5797 1 2023-01-03 14:44:45 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2023-01-03 14:44:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000b27218 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jan 3 14:44:45.450: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. Jan 3 14:44:45.451: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 3 14:44:45.451: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-8754 099ec563-834c-4711-8c4d-62dbca09f9a9 5801 1 2023-01-03 14:44:40 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 462b6917-e80a-4ae1-b9e1-0a5be8a2597b 0xc00094e07f 0xc00094e0d0}] [] [{e2e.test Update apps/v1 2023-01-03 14:44:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-03 14:44:45 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"462b6917-e80a-4ae1-b9e1-0a5be8a2597b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00094e2e8 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 3 14:44:45.457: INFO: Pod "test-cleanup-controller-pq2pv" is available: 
&Pod{ObjectMeta:{test-cleanup-controller-pq2pv test-cleanup-controller- deployment-8754 c568d038-b9e2-47c6-8d5f-74bc1f6dec63 5755 0 2023-01-03 14:44:40 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 099ec563-834c-4711-8c4d-62dbca09f9a9 0xc00094e80f 0xc00094e820}] [] [{kube-controller-manager Update v1 2023-01-03 14:44:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"099ec563-834c-4711-8c4d-62dbca09f9a9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:44:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vh5dt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vh5dt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vh5dt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-worker-erlai2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},A
utomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:44:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:44:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:44:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:44:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.28,StartTime:2023-01-03 14:44:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-03 14:44:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c9eabbdba741a8d6eb48e49609834a5b62dc46c12c38468a03ea895694668807,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.28,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:44:45.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8754" for this suite.
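The deployment dump above shows RevisionHistoryLimit:*0, which is what forces superseded ReplicaSets to be deleted instead of retained for rollback. A minimal sketch of a deployment built that way; replica count, labels, and image are illustrative:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cleanupDeployment keeps zero old ReplicaSets: each rollout deletes
// the ReplicaSet it supersedes.
func cleanupDeployment() *appsv1.Deployment {
	replicas, history := int32(1), int32(0)
	labels := map[string]string{"name": "cleanup-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &history, // 0: prune superseded ReplicaSets immediately
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
					}},
				},
			},
		},
	}
}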
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":2,"skipped":64,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:44:45.518: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 3 14:44:45.610: INFO: Waiting up to 5m0s for pod "pod-8a8aee76-737b-4df5-8303-346c87b8c4a7" in namespace "emptydir-4813" to be "Succeeded or Failed"
Jan 3 14:44:45.622: INFO: Pod "pod-8a8aee76-737b-4df5-8303-346c87b8c4a7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.330532ms
Jan 3 14:44:47.627: INFO: Pod "pod-8a8aee76-737b-4df5-8303-346c87b8c4a7": Phase="Running", Reason="", readiness=true. Elapsed: 2.017009555s
Jan 3 14:44:49.631: INFO: Pod "pod-8a8aee76-737b-4df5-8303-346c87b8c4a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021691605s
STEP: Saw pod success
Jan 3 14:44:49.631: INFO: Pod "pod-8a8aee76-737b-4df5-8303-346c87b8c4a7" satisfied condition "Succeeded or Failed"
Jan 3 14:44:49.635: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-worker-erlai2 pod pod-8a8aee76-737b-4df5-8303-346c87b8c4a7 container test-container: <nil>
STEP: delete the pod
Jan 3 14:44:49.663: INFO: Waiting for pod pod-8a8aee76-737b-4df5-8303-346c87b8c4a7 to disappear
Jan 3 14:44:49.666: INFO: Pod pod-8a8aee76-737b-4df5-8303-346c87b8c4a7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:44:49.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4813" for this suite.
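The tmpfs backing in this test comes from setting the emptyDir medium to Memory; the test then inspects the resulting mount's mode. An illustrative sketch of that volume (the volume name is an assumption):

package sketch

import corev1 "k8s.io/api/core/v1"

// tmpfsVolume is an emptyDir backed by tmpfs rather than node disk.
func tmpfsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
}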
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":74,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:42:21.122: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating service in namespace services-1017
Jan 3 14:42:23.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Jan 3 14:42:23.563: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Jan 3 14:42:23.563: INFO: stdout: "iptables"
Jan 3 14:42:23.563: INFO: proxyMode: iptables
Jan 3 14:42:23.580: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jan 3 14:42:23.587: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-clusterip-timeout in namespace services-1017
STEP: creating replication controller affinity-clusterip-timeout in namespace services-1017
I0103 14:42:23.640120 20 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1017, replica count: 3
I0103 14:42:26.690766 20 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 3 14:42:26.704: INFO: Creating new exec pod
Jan 3 14:42:29.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Jan 3 14:42:32.062: INFO: rc: 1
Jan 3 14:42:32.062: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying...
Jan 3 14:42:33.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:42:35.404: INFO: rc: 1 Jan 3 14:42:35.405: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:42:36.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:42:38.594: INFO: rc: 1 Jan 3 14:42:38.594: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:42:39.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:42:41.469: INFO: rc: 1 Jan 3 14:42:41.469: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:42:42.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:42:44.442: INFO: rc: 1 Jan 3 14:42:44.442: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 3 14:42:45.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:42:47.442: INFO: rc: 1 Jan 3 14:42:47.442: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:42:48.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:42:50.539: INFO: rc: 1 Jan 3 14:42:50.539: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:42:51.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:42:53.472: INFO: rc: 1 Jan 3 14:42:53.472: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:42:54.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:42:56.531: INFO: rc: 1 Jan 3 14:42:56.531: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 3 14:42:57.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:42:59.520: INFO: rc: 1 Jan 3 14:42:59.520: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:43:00.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:02.622: INFO: rc: 1 Jan 3 14:43:02.622: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:43:03.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:05.434: INFO: rc: 1 Jan 3 14:43:05.434: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:43:06.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:08.374: INFO: rc: 1 Jan 3 14:43:08.374: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 3 14:43:09.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:11.400: INFO: rc: 1 Jan 3 14:43:11.400: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:43:12.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:14.393: INFO: rc: 1 Jan 3 14:43:14.393: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:43:15.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:17.410: INFO: rc: 1 Jan 3 14:43:17.411: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:43:18.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:20.403: INFO: rc: 1 Jan 3 14:43:20.404: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 3 14:43:21.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:23.425: INFO: rc: 1 Jan 3 14:43:23.425: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:43:24.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:26.406: INFO: rc: 1 Jan 3 14:43:26.406: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:43:27.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:29.379: INFO: rc: 1 Jan 3 14:43:29.379: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:43:30.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:32.400: INFO: rc: 1 Jan 3 14:43:32.400: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 3 14:43:33.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:35.425: INFO: rc: 1 Jan 3 14:43:35.425: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:43:36.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:38.409: INFO: rc: 1 Jan 3 14:43:38.409: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:43:39.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:41.501: INFO: rc: 1 Jan 3 14:43:41.501: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:43:42.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:44.384: INFO: rc: 1 Jan 3 14:43:44.384: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 3 14:43:45.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:47.380: INFO: rc: 1 Jan 3 14:43:47.381: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:43:48.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:50.423: INFO: rc: 1 Jan 3 14:43:50.424: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:43:51.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:53.416: INFO: rc: 1 Jan 3 14:43:53.416: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:43:54.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:56.426: INFO: rc: 1 Jan 3 14:43:56.426: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 3 14:43:57.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:43:59.402: INFO: rc: 1 Jan 3 14:43:59.403: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:44:00.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:44:02.381: INFO: rc: 1 Jan 3 14:44:02.381: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:44:03.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:44:05.427: INFO: rc: 1 Jan 3 14:44:05.427: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:44:06.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:44:08.380: INFO: rc: 1 Jan 3 14:44:08.380: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 3 14:44:09.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:44:11.412: INFO: rc: 1 Jan 3 14:44:11.412: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:44:12.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:44:14.392: INFO: rc: 1 Jan 3 14:44:14.393: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:44:15.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:44:17.369: INFO: rc: 1 Jan 3 14:44:17.369: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:44:18.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:44:20.432: INFO: rc: 1 Jan 3 14:44:20.432: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 3 14:44:21.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:44:23.474: INFO: rc: 1 Jan 3 14:44:23.474: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:44:24.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:44:26.465: INFO: rc: 1 Jan 3 14:44:26.465: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:44:27.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:44:29.509: INFO: rc: 1 Jan 3 14:44:29.509: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 3 14:44:30.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 3 14:44:32.441: INFO: rc: 1 Jan 3 14:44:32.441: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 3 14:44:32.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Jan 3 14:44:34.796: INFO: rc: 1
Jan 3 14:44:34.796: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1017 exec execpod-affinitycld49 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-timeout 80 nc: connect to affinity-clusterip-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying...
Jan 3 14:44:34.796: FAIL: Unexpected error:
    <*errors.errorString | 0xc001846900>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol
occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc0012166e0, 0x56112e0, 0xc00171d1e0, 0xc000e1cc80)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3365 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2421 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00248bc80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00248bc80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00248bc80, 0x4fc9940)
    /usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1168 +0x2b3
Jan 3 14:44:34.797: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-1017, will wait for the garbage collector to delete the pods
Jan 3 14:44:34.890: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 15.325583ms
Jan 3 14:44:35.390: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.64923ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:44:50.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1017" for this suite.
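For reference, the service shape this failing test creates is a ClusterIP service with ClientIP session affinity and an explicit affinity timeout; the nc reachability probe above never got far enough to exercise the timeout. A minimal, illustrative sketch (port, selector, and timeout values are assumptions, not read from the log):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// affinityService pins each client IP to one backend until the
// configured idle timeout expires.
func affinityService() *corev1.Service {
	timeout := int32(10) // seconds of idle time before affinity expires
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
		Spec: corev1.ServiceSpec{
			Selector:        map[string]string{"name": "affinity-clusterip-timeout"},
			Ports:           []corev1.ServicePort{{Port: 80}},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}
}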
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749

• Failure [149.067 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

  Jan 3 14:44:34.796: Unexpected error:
      <*errors.errorString | 0xc001846900>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3365
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:44:49.693: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:44:53.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6640" for this suite.
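The Kubelet spec above (its PASSED record opens the next block) schedules a read-only busybox container and asserts that writes to the root filesystem are refused. A hedged sketch of that kind of pod object, built with the k8s.io/api types; the name and command are illustrative, not the test's actual manifest:

```go
// readonly_pod.go: constructs and prints a pod whose container sets
// SecurityContext.ReadOnlyRootFilesystem, so any write to / is refused
// by the container runtime.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// The write must fail; the `||` branch is what a passing run sees.
				Command: []string{"/bin/sh", "-c", "echo hi > /file || echo write refused"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```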
•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":82,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:44:38.949: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6027 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6027;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6027 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6027;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6027.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6027.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6027.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6027.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6027.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6027.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6027.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6027.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6027.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6027.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6027.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6027.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6027.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 72.188.131.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.131.188.72_udp@PTR;check="$$(dig +tcp +noall +answer +search 72.188.131.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.131.188.72_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6027 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6027;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6027 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6027;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6027.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6027.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6027.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6027.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6027.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6027.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6027.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6027.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6027.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6027.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6027.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6027.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6027.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 72.188.131.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.131.188.72_udp@PTR;check="$$(dig +tcp +noall +answer +search 72.188.131.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.131.188.72_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 3 14:44:51.115: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:44:51.120: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:44:51.124: INFO: Unable to read wheezy_udp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:44:51.129: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:44:51.134: INFO: Unable to read wheezy_udp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:44:51.138: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:44:51.143: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:44:51.148: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:44:51.192: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:44:51.198: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:44:51.205: INFO: Unable to read jessie_udp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:44:51.211: INFO: Unable to read jessie_tcp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:44:51.216: INFO: Unable to read jessie_udp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:44:51.223: INFO:
Unable to read jessie_tcp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:51.234: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:51.237: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:51.270: INFO: Lookups using dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6027 wheezy_tcp@dns-test-service.dns-6027 wheezy_udp@dns-test-service.dns-6027.svc wheezy_tcp@dns-test-service.dns-6027.svc wheezy_udp@_http._tcp.dns-test-service.dns-6027.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6027.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6027 jessie_tcp@dns-test-service.dns-6027 jessie_udp@dns-test-service.dns-6027.svc jessie_tcp@dns-test-service.dns-6027.svc jessie_udp@_http._tcp.dns-test-service.dns-6027.svc jessie_tcp@_http._tcp.dns-test-service.dns-6027.svc] Jan 3 14:44:56.275: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.281: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.287: INFO: Unable to read wheezy_udp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.292: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.296: INFO: Unable to read wheezy_udp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.301: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.305: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.309: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.337: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.344: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.347: INFO: Unable to read jessie_udp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.351: INFO: Unable to read jessie_tcp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.356: INFO: Unable to read jessie_udp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.360: INFO: Unable to read jessie_tcp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.365: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.369: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:44:56.405: INFO: Lookups using dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6027 wheezy_tcp@dns-test-service.dns-6027 wheezy_udp@dns-test-service.dns-6027.svc wheezy_tcp@dns-test-service.dns-6027.svc wheezy_udp@_http._tcp.dns-test-service.dns-6027.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6027.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6027 jessie_tcp@dns-test-service.dns-6027 jessie_udp@dns-test-service.dns-6027.svc jessie_tcp@dns-test-service.dns-6027.svc jessie_udp@_http._tcp.dns-test-service.dns-6027.svc jessie_tcp@_http._tcp.dns-test-service.dns-6027.svc] Jan 3 14:45:01.275: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.279: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.283: INFO: Unable to read wheezy_udp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.288: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6027 from pod 
dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.294: INFO: Unable to read wheezy_udp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.299: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.304: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.310: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.347: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.353: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.358: INFO: Unable to read jessie_udp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.363: INFO: Unable to read jessie_tcp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.368: INFO: Unable to read jessie_udp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.373: INFO: Unable to read jessie_tcp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.378: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.383: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:01.419: INFO: Lookups using dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6027 wheezy_tcp@dns-test-service.dns-6027 wheezy_udp@dns-test-service.dns-6027.svc wheezy_tcp@dns-test-service.dns-6027.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-6027.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6027.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6027 jessie_tcp@dns-test-service.dns-6027 jessie_udp@dns-test-service.dns-6027.svc jessie_tcp@dns-test-service.dns-6027.svc jessie_udp@_http._tcp.dns-test-service.dns-6027.svc jessie_tcp@_http._tcp.dns-test-service.dns-6027.svc] Jan 3 14:45:06.275: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.278: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.282: INFO: Unable to read wheezy_udp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.286: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.291: INFO: Unable to read wheezy_udp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.295: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.300: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.304: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.332: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.336: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.340: INFO: Unable to read jessie_udp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.346: INFO: Unable to read jessie_tcp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.350: INFO: Unable to read jessie_udp@dns-test-service.dns-6027.svc from pod 
dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.355: INFO: Unable to read jessie_tcp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.360: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.363: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:06.386: INFO: Lookups using dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6027 wheezy_tcp@dns-test-service.dns-6027 wheezy_udp@dns-test-service.dns-6027.svc wheezy_tcp@dns-test-service.dns-6027.svc wheezy_udp@_http._tcp.dns-test-service.dns-6027.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6027.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6027 jessie_tcp@dns-test-service.dns-6027 jessie_udp@dns-test-service.dns-6027.svc jessie_tcp@dns-test-service.dns-6027.svc jessie_udp@_http._tcp.dns-test-service.dns-6027.svc jessie_tcp@_http._tcp.dns-test-service.dns-6027.svc] Jan 3 14:45:11.312: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:11.317: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:11.354: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:11.359: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:11.366: INFO: Unable to read jessie_udp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:11.372: INFO: Unable to read jessie_tcp@dns-test-service.dns-6027 from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:11.377: INFO: Unable to read jessie_udp@dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e) Jan 3 14:45:11.381: INFO: Unable to read jessie_tcp@dns-test-service.dns-6027.svc from pod 
dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:45:11.385: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:45:11.390: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6027.svc from pod dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e: the server could not find the requested resource (get pods dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e)
Jan 3 14:45:11.425: INFO: Lookups using dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-6027.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6027.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6027 jessie_tcp@dns-test-service.dns-6027 jessie_udp@dns-test-service.dns-6027.svc jessie_tcp@dns-test-service.dns-6027.svc jessie_udp@_http._tcp.dns-test-service.dns-6027.svc jessie_tcp@_http._tcp.dns-test-service.dns-6027.svc]
Jan 3 14:45:16.410: INFO: DNS probes using dns-6027/dns-test-bb7831a1-c3c5-4334-942d-0783ca7a583e succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:45:16.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6027" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":20,"skipped":300,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:45:16.678: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
Jan 3 14:45:16.787: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:45:20.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6290" for this suite.
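The DNS spec above drives `dig` through progressively qualified names and only passes once every partial name resolves; the repeated "could not find the requested resource" lines appear to be the test reading back per-name result files from the prober pod before they exist, rather than cluster DNS being down. A minimal sketch of the same partial-qualified-name lookups using Go's stdlib resolver, which (inside a cluster pod) expands bare names through the /etc/resolv.conf search domains; the names mirror the log's dns-6027 namespace:

```go
// dns_partial_lookup.go: resolve a Service by bare, namespace-qualified,
// and svc-qualified names, the way the probe's `dig +search` invocations do.
// Meant to run inside a cluster pod whose resolv.conf carries the usual
// cluster search domains.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	names := []string{
		"dns-test-service",              // bare: relies entirely on the search path
		"dns-test-service.dns-6027",     // namespace-qualified
		"dns-test-service.dns-6027.svc", // still partial: cluster domain appended
	}
	for _, n := range names {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		addrs, err := net.DefaultResolver.LookupHost(ctx, n)
		cancel()
		if err != nil {
			fmt.Printf("%-32s lookup failed: %v\n", n, err)
			continue
		}
		fmt.Printf("%-32s -> %v\n", n, addrs)
	}

	// The probe also checks SRV records for the named port, e.g.
	// _http._tcp.dns-test-service.dns-6027.svc:
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	_, srvs, err := net.DefaultResolver.LookupSRV(ctx, "http", "tcp", "dns-test-service.dns-6027.svc")
	fmt.Println("SRV records:", srvs, err)
}
```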
•
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":21,"skipped":310,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SS
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":81,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:44:50.201: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating service in namespace services-785
Jan 3 14:44:52.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-785 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Jan 3 14:44:52.555: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Jan 3 14:44:52.556: INFO: stdout: "iptables"
Jan 3 14:44:52.556: INFO: proxyMode: iptables
Jan 3 14:44:52.569: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jan 3 14:44:52.572: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-clusterip-timeout in namespace services-785
STEP: creating replication controller affinity-clusterip-timeout in namespace services-785
I0103 14:44:52.603839 20 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-785, replica count: 3
I0103 14:44:55.654501 20 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 3 14:44:55.666: INFO: Creating new exec pod
Jan 3 14:44:58.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-785 exec execpod-affinityw85b7 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Jan 3 14:44:58.971: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n"
Jan 3 14:44:58.971: INFO: stdout: ""
Jan 3 14:44:58.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-785 exec execpod-affinityw85b7 -- /bin/sh -x -c nc -zv -t -w 2 10.140.69.150 80'
Jan 3 14:44:59.207: INFO: stderr: "+ nc -zv -t -w 2 10.140.69.150 80\nConnection to 10.140.69.150 80 port [tcp/http] succeeded!\n"
Jan 3 14:44:59.207: INFO: stdout: ""
Jan 3 14:44:59.207: INFO: Running
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-785 exec execpod-affinityw85b7 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.140.69.150:80/ ; done' Jan 3 14:44:59.563: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n" Jan 3 14:44:59.563: INFO: stdout: "\naffinity-clusterip-timeout-j8b7n\naffinity-clusterip-timeout-j8b7n\naffinity-clusterip-timeout-j8b7n\naffinity-clusterip-timeout-j8b7n\naffinity-clusterip-timeout-j8b7n\naffinity-clusterip-timeout-j8b7n\naffinity-clusterip-timeout-j8b7n\naffinity-clusterip-timeout-j8b7n\naffinity-clusterip-timeout-j8b7n\naffinity-clusterip-timeout-j8b7n\naffinity-clusterip-timeout-j8b7n\naffinity-clusterip-timeout-j8b7n\naffinity-clusterip-timeout-j8b7n\naffinity-clusterip-timeout-j8b7n\naffinity-clusterip-timeout-j8b7n\naffinity-clusterip-timeout-j8b7n" Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Received response from host: affinity-clusterip-timeout-j8b7n Jan 3 14:44:59.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-785 exec execpod-affinityw85b7 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.140.69.150:80/' Jan 3 
14:44:59.784: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n"
Jan 3 14:44:59.784: INFO: stdout: "affinity-clusterip-timeout-j8b7n"
Jan 3 14:45:19.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-785 exec execpod-affinityw85b7 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.140.69.150:80/'
Jan 3 14:45:19.972: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.140.69.150:80/\n"
Jan 3 14:45:19.974: INFO: stdout: "affinity-clusterip-timeout-mdt2x"
Jan 3 14:45:19.974: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-785, will wait for the garbage collector to delete the pods
Jan 3 14:45:20.047: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 8.372761ms
Jan 3 14:45:20.147: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.222815ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:45:30.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-785" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
•
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":81,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:45:30.241: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 3 14:45:30.288: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44f52e28-ebe3-4612-a940-37d88bfcc861" in namespace "downward-api-7464" to be "Succeeded or Failed"
Jan 3 14:45:30.307: INFO: Pod "downwardapi-volume-44f52e28-ebe3-4612-a940-37d88bfcc861": Phase="Pending", Reason="", readiness=false. Elapsed: 19.295141ms
Jan 3 14:45:32.312: INFO: Pod "downwardapi-volume-44f52e28-ebe3-4612-a940-37d88bfcc861": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.024181343s
STEP: Saw pod success
Jan 3 14:45:32.312: INFO: Pod "downwardapi-volume-44f52e28-ebe3-4612-a940-37d88bfcc861" satisfied condition "Succeeded or Failed"
Jan 3 14:45:32.317: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 pod downwardapi-volume-44f52e28-ebe3-4612-a940-37d88bfcc861 container client-container: <nil>
STEP: delete the pod
Jan 3 14:45:32.336: INFO: Waiting for pod downwardapi-volume-44f52e28-ebe3-4612-a940-37d88bfcc861 to disappear
Jan 3 14:45:32.339: INFO: Pod downwardapi-volume-44f52e28-ebe3-4612-a940-37d88bfcc861 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:45:32.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7464" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":107,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:45:32.385: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 3 14:45:34.439: INFO: &Pod{ObjectMeta:{send-events-0357b29f-a4ef-466b-95dc-0d44cfb5ec3f events-4709 3e1391f1-90a0-4161-a9ab-576938b311a3 6318 0 2023-01-03 14:45:32 +0000 UTC <nil> <nil> map[name:foo time:419925273] map[] [] [] [{e2e.test Update v1 2023-01-03 14:45:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:45:33 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.38\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lwnkb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lwnkb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lwnkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:45:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 
14:45:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:45:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:45:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.38,StartTime:2023-01-03 14:45:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-03 14:45:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://9d7560b2eb9a0402622cff14daef761fbc6f0d835c7919ef6ff129e7604c595a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Jan 3 14:45:36.444: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 3 14:45:38.449: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:45:38.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4709" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":6,"skipped":130,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:45:38.482: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 3 14:45:39.110: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 3 14:45:41.124: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353939, loc:(*time.Location)(0x798e100)}},
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353939, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353939, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353939, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 3 14:45:44.144: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jan 3 14:45:46.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=webhook-1779 attach --namespace=webhook-1779 to-be-attached-pod -i -c=container1'
Jan 3 14:45:46.302: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:45:46.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1779" for this suite.
STEP: Destroying namespace "webhook-1779-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":7,"skipped":138,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:45:20.252: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Performing setup for networking test in namespace pod-network-test-6167
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 3 14:45:20.309: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 3 14:45:20.385: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 3 14:45:22.389: INFO: The status of Pod netserver-0 is Running
(Ready = false)
Jan 3 14:45:24.390: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:45:26.389: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:45:28.391: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:45:30.389: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:45:32.390: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:45:34.389: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:45:36.390: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:45:38.389: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 3 14:45:38.395: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jan 3 14:45:40.399: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jan 3 14:45:40.407: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jan 3 14:45:40.422: INFO: The status of Pod netserver-3 is Running (Ready = false)
Jan 3 14:45:42.427: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jan 3 14:45:44.457: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Jan 3 14:45:44.457: INFO: Going to poll 192.168.0.23 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Jan 3 14:45:44.460: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.0.23 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6167 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:45:44.460: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:45:45.550: INFO: Found all 1 expected endpoints: [netserver-0]
Jan 3 14:45:45.550: INFO: Going to poll 192.168.1.36 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Jan 3 14:45:45.554: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.1.36 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6167 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:45:45.554: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:45:46.644: INFO: Found all 1 expected endpoints: [netserver-1]
Jan 3 14:45:46.644: INFO: Going to poll 192.168.2.32 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Jan 3 14:45:46.649: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.2.32 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6167 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:45:46.649: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:45:47.729: INFO: Found all 1 expected endpoints: [netserver-2]
Jan 3 14:45:47.729: INFO: Going to poll 192.168.6.20 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Jan 3 14:45:47.733: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.6.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6167 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:45:47.733: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:45:48.813: INFO: Found all 1 expected endpoints: [netserver-3]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:45:48.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6167" for this suite.
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":312,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:45:46.424: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:45:49.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-124" for this suite.
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":8,"skipped":151,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
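For readers reconstructing the adoption check above outside the suite, here is a minimal client-go sketch of the same flow. The kubeconfig path matches this run, but the namespace, object names, and image are illustrative assumptions, not values taken from this log.

// Sketch: create an orphan pod, then an RC whose selector matches it;
// the controller manager should adopt the pod instead of creating a new one.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	labels := map[string]string{"name": "pod-adoption"} // the 'name' label the selector matches

	// 1. An orphan pod carrying the label, with no owner reference.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{Containers: []corev1.Container{
			{Name: "httpd", Image: "docker.io/library/httpd:2.4.38-alpine"},
		}},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// 2. A replication controller with a matching selector; its controller
	// adopts the orphan rather than spawning a replacement.
	replicas := int32(1)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       pod.Spec,
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers("default").Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The adopted pod now gains an ownerReference pointing at the RC.
}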
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:45:48.872: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-upd-33dcf426-2f65-453e-9673-812819c7da11
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:45:50.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6670" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":342,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
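The spec above checks that ConfigMap BinaryData keys survive the volume round-trip byte-for-byte, alongside ordinary text keys. A minimal sketch of the objects involved, with assumed names and namespace (the image is the agnhost image seen elsewhere in this run):

// Sketch: ConfigMap with a text key and a binary key, mounted as a volume.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
		Data:       map[string]string{"data-1": "value-1"},                   // text key
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}}, // binary key
	}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-volume-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Args:         []string{"pause"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/cfg"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// /etc/cfg/data-1 and /etc/cfg/dump.bin appear inside the container,
	// the latter with the exact bytes from BinaryData.
}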
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:45:49.667: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:45:49.709: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 3 14:45:49.720: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 3 14:45:54.724: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 3 14:45:54.724: INFO: Creating deployment "test-rolling-update-deployment"
Jan 3 14:45:54.728: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 3 14:45:54.749: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 3 14:45:56.756: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 3 14:45:56.762: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79
Jan 3 14:45:56.776: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9701 383628d3-8d5e-4c99-9e35-2c85e30a3ed8 6684 1 2023-01-03 14:45:54 +0000 UTC <nil> <nil> map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-01-03 14:45:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-03 14:45:56 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000e13a18 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-03 14:45:54 +0000 UTC,LastTransitionTime:2023-01-03 14:45:54 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-6b6bf9df46" has successfully progressed.,LastUpdateTime:2023-01-03 14:45:56 +0000 UTC,LastTransitionTime:2023-01-03 14:45:54 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 3 14:45:56.780: INFO: New ReplicaSet "test-rolling-update-deployment-6b6bf9df46" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46 deployment-9701 b2102f5c-8ddc-4685-80e0-50c5993578af 6673 1 2023-01-03 14:45:54 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:6b6bf9df46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 383628d3-8d5e-4c99-9e35-2c85e30a3ed8 0xc0028c2467 0xc0028c2468}] [] [{kube-controller-manager Update apps/v1 2023-01-03 14:45:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"383628d3-8d5e-4c99-9e35-2c85e30a3ed8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 6b6bf9df46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0028c24f8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 3 14:45:56.780: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 3 14:45:56.780: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9701 a3b07488-bdb7-4bda-a9cc-4799df27d6fd 6682 2 2023-01-03 14:45:49 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 383628d3-8d5e-4c99-9e35-2c85e30a3ed8 0xc0028c235f 0xc0028c2370}] [] [{e2e.test Update apps/v1 2023-01-03 14:45:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-03 14:45:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"383628d3-8d5e-4c99-9e35-2c85e30a3ed8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0028c2408 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 3 14:45:56.784: INFO: Pod "test-rolling-update-deployment-6b6bf9df46-lv649" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46-lv649 test-rolling-update-deployment-6b6bf9df46- deployment-9701 524a6410-59fc-4088-983e-742b1ed44c79 6672 0 2023-01-03 14:45:54 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-6b6bf9df46 b2102f5c-8ddc-4685-80e0-50c5993578af 0xc0028c28e7 0xc0028c28e8}] [] [{kube-controller-manager Update v1 2023-01-03 14:45:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b2102f5c-8ddc-4685-80e0-50c5993578af\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:45:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.41\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mcjtq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mcjtq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mcjtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:45:54 
+0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:45:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:45:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:45:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.41,StartTime:2023-01-03 14:45:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-03 14:45:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://a186e95c6ce3002c9e657b245d2416e33fc20f5b92753d69bf3e8f45737dd457,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:45:56.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9701" for this suite.
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":9,"skipped":223,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:45:50.957: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 3 14:45:53.526: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a383145a-0229-4fa7-9232-f341a06a1fa7"
Jan 3 14:45:53.527: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a383145a-0229-4fa7-9232-f341a06a1fa7" in namespace "pods-6228" to be "terminated due to deadline exceeded"
"pod-update-activedeadlineseconds-a383145a-0229-4fa7-9232-f341a06a1fa7": Phase="Running", Reason="", readiness=true. Elapsed: 3.493685ms Jan 3 14:45:55.535: INFO: Pod "pod-update-activedeadlineseconds-a383145a-0229-4fa7-9232-f341a06a1fa7": Phase="Running", Reason="", readiness=true. Elapsed: 2.008065454s Jan 3 14:45:57.541: INFO: Pod "pod-update-activedeadlineseconds-a383145a-0229-4fa7-9232-f341a06a1fa7": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.014059903s Jan 3 14:45:57.541: INFO: Pod "pod-update-activedeadlineseconds-a383145a-0229-4fa7-9232-f341a06a1fa7" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 14:45:57.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-6228" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":345,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 3 14:45:57.629: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replication-controller �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 3 14:45:57.673: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace �[1mSTEP�[0m: Creating rc "condition-test" that asks for more than the allowed pod quota �[1mSTEP�[0m: Checking rc "condition-test" has the desired failure condition set �[1mSTEP�[0m: Scaling down rc "condition-test" to satisfy pod quota Jan 3 14:45:59.705: INFO: Updating replication controller "condition-test" �[1mSTEP�[0m: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 14:46:00.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replication-controller-9020" for this suite. 
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:46:00.751: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-d5d4995d-6ab0-4a36-a791-20be97ce825d
STEP: Creating a pod to test consume configMaps
Jan 3 14:46:00.788: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-736a0d70-f670-4aac-8de2-57c509db772b" in namespace "projected-8224" to be "Succeeded or Failed"
Jan 3 14:46:00.791: INFO: Pod "pod-projected-configmaps-736a0d70-f670-4aac-8de2-57c509db772b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.732403ms
Jan 3 14:46:02.795: INFO: Pod "pod-projected-configmaps-736a0d70-f670-4aac-8de2-57c509db772b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007175015s
STEP: Saw pod success
Jan 3 14:46:02.795: INFO: Pod "pod-projected-configmaps-736a0d70-f670-4aac-8de2-57c509db772b" satisfied condition "Succeeded or Failed"
Jan 3 14:46:02.799: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 pod pod-projected-configmaps-736a0d70-f670-4aac-8de2-57c509db772b container agnhost-container: <nil>
STEP: delete the pod
Jan 3 14:46:02.828: INFO: Waiting for pod pod-projected-configmaps-736a0d70-f670-4aac-8de2-57c509db772b to disappear
Jan 3 14:46:02.831: INFO: Pod pod-projected-configmaps-736a0d70-f670-4aac-8de2-57c509db772b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:46:02.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8224" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":392,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
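A sketch of the pod shape that spec exercises: a projected configMap volume consumed by a container running as a non-root UID. Names, the UID, and the agnhost `mounttest` arguments are assumptions for illustration, not values from this run.

// Sketch: projected configMap volume read by a non-root container.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	uid := int64(1000) // any non-zero UID makes the container non-root
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-nonroot-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected",
				VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
						},
					}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:            "reader",
				Image:           "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Args:            []string{"mounttest", "--file_content=/etc/projected/data-1"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "projected", MountPath: "/etc/projected"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The pod should reach Succeeded after printing the mounted file.
}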
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:46:02.855: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:46:02.891: INFO: Creating deployment "test-recreate-deployment"
Jan 3 14:46:02.896: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 3 14:46:02.909: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 3 14:46:04.918: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 3 14:46:04.921: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 3 14:46:04.931: INFO: Updating deployment test-recreate-deployment
Jan 3 14:46:04.933: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79
Jan 3 14:46:05.010: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7552 19b35a11-5249-4eab-bf23-72f2ba3a88fc 6891 2 2023-01-03 14:46:02 +0000 UTC <nil> <nil> map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-03 14:46:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-03 14:46:05 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00333f228 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-03 14:46:04 +0000 UTC,LastTransitionTime:2023-01-03 14:46:04 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2023-01-03 14:46:04 +0000 UTC,LastTransitionTime:2023-01-03 14:46:02 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 3 14:46:05.014: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-7552 881f4e6e-5069-47b2-be61-f7f6031788e2 6888 1 2023-01-03 14:46:04 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 19b35a11-5249-4eab-bf23-72f2ba3a88fc 0xc00333f6b0 0xc00333f6b1}] [] [{kube-controller-manager Update apps/v1 2023-01-03 14:46:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19b35a11-5249-4eab-bf23-72f2ba3a88fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00333f728 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 3 14:46:05.014: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 3 14:46:05.014: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-786dd7c454 deployment-7552 ee0d0b00-8c19-4873-b1ec-202c9c8b7a55 6880 2 2023-01-03 14:46:02 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:786dd7c454] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 19b35a11-5249-4eab-bf23-72f2ba3a88fc 0xc00333f5c7 0xc00333f5c8}] [] [{kube-controller-manager Update apps/v1 2023-01-03 14:46:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19b35a11-5249-4eab-bf23-72f2ba3a88fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 786dd7c454,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:786dd7c454] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00333f658 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 3 14:46:05.019: INFO: Pod "test-recreate-deployment-f79dd4667-vhszl" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-vhszl test-recreate-deployment-f79dd4667- deployment-7552 38a14cda-c5ab-4ab5-a6bd-a3eb1fe023bc 6892 0 2023-01-03 14:46:04 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 881f4e6e-5069-47b2-be61-f7f6031788e2 0xc003253bd0 0xc003253bd1}] [] [{kube-controller-manager Update v1 2023-01-03 14:46:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"881f4e6e-5069-47b2-be61-f7f6031788e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:46:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gxgjs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gxgjs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gxgjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-worker-u044o2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:46:04 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:46:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:46:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:46:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-01-03 14:46:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:46:05.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7552" for this suite.
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":27,"skipped":398,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:46:05.047: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-ccbf702d-fbf0-4f44-8f0a-81f8518e77d2
STEP: Creating a pod to test consume secrets
Jan 3 14:46:05.094: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bdc12168-fc2d-4236-a9ab-dee9d326538f" in namespace "projected-2769" to be "Succeeded or Failed"
Jan 3 14:46:05.098: INFO: Pod "pod-projected-secrets-bdc12168-fc2d-4236-a9ab-dee9d326538f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.513925ms
Jan 3 14:46:07.102: INFO: Pod "pod-projected-secrets-bdc12168-fc2d-4236-a9ab-dee9d326538f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007480585s
STEP: Saw pod success
Jan 3 14:46:07.102: INFO: Pod "pod-projected-secrets-bdc12168-fc2d-4236-a9ab-dee9d326538f" satisfied condition "Succeeded or Failed"
Jan 3 14:46:07.105: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 pod pod-projected-secrets-bdc12168-fc2d-4236-a9ab-dee9d326538f container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 3 14:46:07.119: INFO: Waiting for pod pod-projected-secrets-bdc12168-fc2d-4236-a9ab-dee9d326538f to disappear
Jan 3 14:46:07.123: INFO: Pod pod-projected-secrets-bdc12168-fc2d-4236-a9ab-dee9d326538f no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:46:07.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2769" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":405,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:46:07.171: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: set up a multi version CRD
Jan 3 14:46:07.212: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:46:22.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5004" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":29,"skipped":433,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
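For context on the multi-version CRD this spec sets up: flipping a version's served flag to false removes that version's definition from the published OpenAPI spec while the other version stays. A sketch of the CRD shape, with an assumed group and kind (not the suite's generated ones):

// Sketch: two-version CRD where v2 is not served, so only v1 is published.
package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := apiextensionsclient.NewForConfigOrDie(cfg)

	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				// v1 stays served: its definition remains in the OpenAPI spec.
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				// v2 is not served: its definition drops out of the published
				// spec, which is what the test above verifies.
				{Name: "v2", Served: false, Storage: false, Schema: schema},
			},
		},
	}
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(
		context.Background(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}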
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:46:22.576: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 3 14:46:23.632: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:46:23.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5473" for this suite.
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":439,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
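The mechanism behind that spec: with TerminationMessagePolicy FallbackToLogsOnError, a container that fails without writing /dev/termination-log gets its termination message from the tail of its log ("DONE" in the run above). A sketch with assumed pod name and image:

// Sketch: failing container whose log tail becomes the termination message.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "fail-with-log",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "echo DONE; exit 1"},
				// Because the container exits non-zero and writes nothing to
				// /dev/termination-log, the kubelet falls back to the log tail.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// status.containerStatuses[0].state.terminated.message should read "DONE".
}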
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:46:23.676: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: set up a multi version CRD
Jan 3 14:46:23.716: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:46:39.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-936" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":31,"skipped":450,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:46:39.514: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a service externalname-service with the type=ExternalName in namespace services-6198
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-6198
I0103 14:46:39.593268 14 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6198, replica count: 2
I0103 14:46:42.644762 14 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 3 14:46:42.644: INFO: Creating new exec pod
Jan 3 14:46:45.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6198 exec execpodcvc6k -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 3 14:46:45.938: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jan 3 14:46:45.938: INFO: stdout: ""
Jan 3 14:46:45.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6198 exec execpodcvc6k -- /bin/sh -x -c nc -zv -t -w 2 10.139.138.136 80'
Jan 3 14:46:46.130: INFO: stderr: "+ nc -zv -t -w 2 10.139.138.136 80\nConnection to 10.139.138.136 80 port [tcp/http] succeeded!\n"
Jan 3 14:46:46.130: INFO: stdout: ""
Jan 3 14:46:46.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6198 exec execpodcvc6k -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.7 31723'
Jan 3 14:46:46.308: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.7 31723\nConnection to 172.18.0.7 31723 port [tcp/31723] succeeded!\n"
Jan 3 14:46:46.308: INFO: stdout: ""
Jan 3 14:46:46.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6198 exec execpodcvc6k -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.4 31723'
Jan 3 14:46:46.501: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.4 31723\nConnection to 172.18.0.4 31723 port [tcp/31723] succeeded!\n"
Jan 3 14:46:46.501: INFO: stdout: ""
Jan 3 14:46:46.501: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:46:46.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6198" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":32,"skipped":457,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
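The type flip at the heart of that spec is a plain service update: clear the external name, set the type to NodePort, and give the service ports so the apiserver can allocate a node port (31723 in the run above). A client-go sketch, with the namespace assumed:

// Sketch: convert an ExternalName service to NodePort in place.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	svcs := cs.CoreV1().Services("default")

	svc, err := svcs.Get(ctx, "externalname-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = "" // must be cleared when leaving type ExternalName
	svc.Spec.Ports = []corev1.ServicePort{{
		Port: 80, TargetPort: intstr.FromInt(80), Protocol: corev1.ProtocolTCP,
	}}
	if _, err := svcs.Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	// kube-proxy then programs <nodeIP>:<allocatedNodePort>, which is what
	// the nc probes against 172.18.0.7:31723 and 172.18.0.4:31723 verified.
}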
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:46:46.582: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 3 14:46:46.625: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:46:48.866: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:47:00.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7123" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":33,"skipped":469,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
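Note: the spec applies two CRDs that share a group and version but declare different kinds, then asserts both schemas are published. A rough manual equivalent, assuming two such CRDs are already applied and that jq is available (<KindA>/<KindB> are placeholders, not names from this run):

  # both kinds should show up as definitions in the aggregated OpenAPI spec
  kubectl get --raw /openapi/v2 | jq -r '.definitions | keys[]' | grep -Ei '<KindA>|<KindB>'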
[BeforeEach] [sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:47:00.503: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:47:00.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2161" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":-1,"completed":34,"skipped":487,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:47:00.593: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should support --unix-socket=/path [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Starting the proxy
Jan 3 14:47:00.624: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6463 proxy --unix-socket=/tmp/kubectl-proxy-unix326356835/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:47:00.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6463" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":35,"skipped":493,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
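Note: the proxy spec above serves the API over a unix domain socket rather than TCP; roughly equivalent by hand (the socket path is illustrative):

  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  curl -s --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
  kill %1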
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:47:00.732: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-fbe56b22-9843-44ab-bf17-469f9367776e
STEP: Creating a pod to test consume configMaps
Jan 3 14:47:00.771: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-380ada8c-5d2c-47ac-b62a-af9a976a3851" in namespace "projected-1519" to be "Succeeded or Failed"
Jan 3 14:47:00.774: INFO: Pod "pod-projected-configmaps-380ada8c-5d2c-47ac-b62a-af9a976a3851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.818123ms
Jan 3 14:47:02.778: INFO: Pod "pod-projected-configmaps-380ada8c-5d2c-47ac-b62a-af9a976a3851": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006788224s
STEP: Saw pod success
Jan 3 14:47:02.778: INFO: Pod "pod-projected-configmaps-380ada8c-5d2c-47ac-b62a-af9a976a3851" satisfied condition "Succeeded or Failed"
Jan 3 14:47:02.781: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 pod pod-projected-configmaps-380ada8c-5d2c-47ac-b62a-af9a976a3851 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jan 3 14:47:02.798: INFO: Waiting for pod pod-projected-configmaps-380ada8c-5d2c-47ac-b62a-af9a976a3851 to disappear
Jan 3 14:47:02.801: INFO: Pod pod-projected-configmaps-380ada8c-5d2c-47ac-b62a-af9a976a3851 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:47:02.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1519" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":513,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
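Note: "consumable in multiple volumes" means the same ConfigMap is mounted through two projected volumes in one pod and read back from both paths; a minimal sketch with made-up names:

  kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-demo-pod
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "cat /etc/projected-volume-1/data-1 /etc/projected-volume-2/data-1"]
      volumeMounts:
      - name: vol-1
        mountPath: /etc/projected-volume-1
      - name: vol-2
        mountPath: /etc/projected-volume-2
    volumes:
    - name: vol-1
      projected:
        sources:
        - configMap:
            name: projected-cm-demo
    - name: vol-2
      projected:
        sources:
        - configMap:
            name: projected-cm-demo
  EOF
  kubectl logs projected-cm-demo-pod   # both mounts should print value-1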
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:47:02.896: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187
[It] should contain environment variables for services [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:47:04.975: INFO: Waiting up to 5m0s for pod "client-envvars-6bee70b4-eae6-4bb0-b6f8-41f37d5c0b57" in namespace "pods-5508" to be "Succeeded or Failed"
Jan 3 14:47:04.983: INFO: Pod "client-envvars-6bee70b4-eae6-4bb0-b6f8-41f37d5c0b57": Phase="Pending", Reason="", readiness=false. Elapsed: 7.680381ms
Jan 3 14:47:06.987: INFO: Pod "client-envvars-6bee70b4-eae6-4bb0-b6f8-41f37d5c0b57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012237232s
STEP: Saw pod success
Jan 3 14:47:06.987: INFO: Pod "client-envvars-6bee70b4-eae6-4bb0-b6f8-41f37d5c0b57" satisfied condition "Succeeded or Failed"
Jan 3 14:47:06.991: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 pod client-envvars-6bee70b4-eae6-4bb0-b6f8-41f37d5c0b57 container env3cont: <nil>
STEP: delete the pod
Jan 3 14:47:07.009: INFO: Waiting for pod client-envvars-6bee70b4-eae6-4bb0-b6f8-41f37d5c0b57 to disappear
Jan 3 14:47:07.014: INFO: Pod client-envvars-6bee70b4-eae6-4bb0-b6f8-41f37d5c0b57 no longer exists
[AfterEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:47:07.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5508" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":560,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
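Note: this spec relies on the kubelet exposing every Service that existed when a pod starts as <SERVICE_NAME>_SERVICE_HOST / <SERVICE_NAME>_SERVICE_PORT environment variables; a hand-run sketch with illustrative names (the Service must exist before the pod is created, or the variables will be absent):

  kubectl create service clusterip fooservice --tcp=8765:8080
  kubectl run env-probe --image=busybox --restart=Never -- sh -c 'env | grep FOOSERVICE'
  kubectl logs env-probe   # expect FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT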
[BeforeEach] [k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:47:07.101: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:47:09.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9878" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":605,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
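Note: the /etc/hosts entries checked by this spec come from pod.spec.hostAliases, which the kubelet merges into the container's hosts file; a minimal sketch with made-up values:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostaliases-demo
  spec:
    restartPolicy: Never
    hostAliases:
    - ip: "123.45.67.89"
      hostnames: ["host-alias-demo.local"]
    containers:
    - name: busybox
      image: busybox
      command: ["cat", "/etc/hosts"]
  EOF
  kubectl logs hostaliases-demo   # should include a 123.45.67.89 host-alias-demo.local line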
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:47:09.243: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:47:20.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2414" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":39,"skipped":649,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:47:20.430: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name secret-emptykey-test-d1c7a7ae-65b5-4c68-9e50-d0b7d0de38a1
[AfterEach] [sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:47:20.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7002" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":40,"skipped":699,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSS
------------------------------
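Note: the ResourceQuota flow logged above (count, create, capture ReplicaSet creation, release usage) can be observed manually with an object-count quota; a sketch with illustrative names (a Deployment is used here because it spawns the ReplicaSet, and the pause image is just a placeholder workload):

  kubectl create quota replicaset-quota --hard=count/replicasets.apps=2
  kubectl create deployment quota-probe --image=registry.k8s.io/pause:3.9
  kubectl describe quota replicaset-quota   # Used for count/replicasets.apps should rise to 1
  kubectl delete deployment quota-probe     # after deletion, the usage is released again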
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:44:53.786: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2063.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2063.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2063.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2063.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2063.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2063.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
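Note: both probe containers loop over getent/dig and write an OK marker file per successful lookup; the test then fetches those markers back through the apiserver's pod proxy, and it is that fetch, not the lookup itself, which appears to time out in the failure below. The per-pod A record uses the dashed-IP form <a-b-c-d>.<namespace>.pod.cluster.local; a one-off check from inside any pod in the namespace would look like (pod name and IP are illustrative):

  kubectl exec <probe-pod> -n dns-2063 -- sh -c 'getent hosts dns-querier-1; dig +short 192-168-2-27.dns-2063.pod.cluster.local'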
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 3 14:48:34.593: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-2063.svc.cluster.local from pod dns-2063/dns-test-fe8aff60-b4b3-438d-a86b-de0b1a9a4081: an error on the server ("unknown") has prevented the request from succeeding (get pods dns-test-fe8aff60-b4b3-438d-a86b-de0b1a9a4081)
Jan 3 14:50:01.860: FAIL: Unable to read wheezy_hosts@dns-querier-1 from pod dns-2063/dns-test-fe8aff60-b4b3-438d-a86b-de0b1a9a4081: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-2063/pods/dns-test-fe8aff60-b4b3-438d-a86b-de0b1a9a4081/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded
Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0035d7df8, 0xcb0200, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00089bbe0, 0xc0035d7df8, 0xc00089bbe0, 0xc0035d7df8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0035d7df8, 0x4a, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc002d40780, 0x8, 0x8, 0x4dccbe5, 0x7, 0xc001ad2400, 0x56112e0, 0xc0016b5b80, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x158 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:458 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0011ca000, 0xc001ad2400, 0xc002d40780, 0x8, 0x8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:521 +0x34e k8s.io/kubernetes/test/e2e/network.glob..func2.4() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:126 +0x62a k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0019dd680) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0019dd680) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0019dd680, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 E0103 14:50:01.861106 16 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jan 3 14:50:01.860: Unable to read wheezy_hosts@dns-querier-1 from pod dns-2063/dns-test-fe8aff60-b4b3-438d-a86b-de0b1a9a4081: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-2063/pods/dns-test-fe8aff60-b4b3-438d-a86b-de0b1a9a4081/proxy/results/wheezy_hosts@dns-querier-1\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0035d7df8, 0xcb0200, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00089bbe0, 0xc0035d7df8, 0xc00089bbe0, 0xc0035d7df8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0035d7df8, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc002d40780, 0x8, 0x8, 0x4dccbe5, 0x7, 0xc001ad2400, 0x56112e0, 0xc0016b5b80, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:458\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0011ca000, 0xc001ad2400, 0xc002d40780, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:521 +0x34e\nk8s.io/kubernetes/test/e2e/network.glob..func2.4()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:126 +0x62a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc0019dd680)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc0019dd680)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc0019dd680, 0x4fc9940)\n\t/usr/local/go/src/testing/testing.go:1123 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1168 
+0x2b3"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. ) goroutine 110 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x499f1e0, 0xc0026f6100) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x499f1e0, 0xc0026f6100) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc002dd4140, 0x12f, 0x77a462c, 0x7d, 0xd3, 0xc0038dd000, 0x7fb) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5 panic(0x41905e0, 0x5431f10) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc002dd4140, 0x12f, 0xc0035d78a0, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc002dd4140, 0x12f, 0xc0035d7988, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5 k8s.io/kubernetes/test/e2e/framework.Failf(0x4e68bfb, 0x24, 0xc0035d7be8, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219 k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:481 +0xa6d k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0035d7df8, 0xcb0200, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00089bbe0, 0xc0035d7df8, 0xc00089bbe0, 0xc0035d7df8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0035d7df8, 0x4a, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc002d40780, 0x8, 0x8, 0x4dccbe5, 0x7, 0xc001ad2400, 0x56112e0, 0xc0016b5b80, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x158 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:458 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0011ca000, 0xc001ad2400, 0xc002d40780, 0x8, 0x8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:521 +0x34e k8s.io/kubernetes/test/e2e/network.glob..func2.4() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:126 +0x62a k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000678840, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000678840, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc001124a40, 0x54fc2e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001e6db30, 0x0, 0x54fc2e0, 0xc00015a8c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001e6db30, 0x54fc2e0, 0xc00015a8c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001e1d680, 0xc001e6db30, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001e1d680, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001e1d680, 0xc002883310) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000148230, 0x7f39964ef6c0, 0xc0019dd680, 0x4e003e0, 0x14, 0xc0023ede90, 0x3, 0x3, 0x55b68a0, 0xc00015a8c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x5500f20, 0xc0019dd680, 0x4e003e0, 0x14, 0xc00143a280, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x5500f20, 0xc0019dd680, 0x4e003e0, 0x14, 0xc0007d3160, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0019dd680) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0019dd680) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0019dd680, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:50:01.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2063" for this suite.
• Failure [308.110 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:50:01.860: Unable to read wheezy_hosts@dns-querier-1 from pod dns-2063/dns-test-fe8aff60-b4b3-438d-a86b-de0b1a9a4081: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-2063/pods/dns-test-fe8aff60-b4b3-438d-a86b-de0b1a9a4081/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211
------------------------------
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:43:33.309: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should serve a basic image on each replica with a public image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating replication controller my-hostname-basic-092a071d-51d0-4bc2-8bce-80a5f12ad890
Jan 3 14:43:33.393: INFO: Pod name my-hostname-basic-092a071d-51d0-4bc2-8bce-80a5f12ad890: Found 0 pods out of 1
Jan 3 14:43:38.402: INFO: Pod name my-hostname-basic-092a071d-51d0-4bc2-8bce-80a5f12ad890: Found 1 pods out of 1
Jan 3 14:43:38.402: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-092a071d-51d0-4bc2-8bce-80a5f12ad890" are running
Jan 3
14:43:38.408: INFO: Pod "my-hostname-basic-092a071d-51d0-4bc2-8bce-80a5f12ad890-grpz8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-03 14:43:33 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-03 14:43:34 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-03 14:43:34 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-03 14:43:33 +0000 UTC Reason: Message:}]) Jan 3 14:43:38.409: INFO: Trying to dial the pod Jan 3 14:47:16.769: INFO: Controller my-hostname-basic-092a071d-51d0-4bc2-8bce-80a5f12ad890: Failed to GET from replica 1 [my-hostname-basic-092a071d-51d0-4bc2-8bce-80a5f12ad890-grpz8]: an error on the server ("unknown") has prevented the request from succeeding (get pods my-hostname-basic-092a071d-51d0-4bc2-8bce-80a5f12ad890-grpz8) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353813, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353814, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353814, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353813, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.6", PodIP:"192.168.2.27", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.2.27"}}, StartTime:(*v1.Time)(0xc0039ce340), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-092a071d-51d0-4bc2-8bce-80a5f12ad890", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0039ce3a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.21", ImageID:"k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a", ContainerID:"containerd://6746b69f3b53df1dbf15d537608826f0103793ea34fe3de79e31785e8fdf2ee2", Started:(*bool)(0xc00305e18a)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Jan 3 14:50:49.760: INFO: Controller my-hostname-basic-092a071d-51d0-4bc2-8bce-80a5f12ad890: Failed to GET from replica 1 [my-hostname-basic-092a071d-51d0-4bc2-8bce-80a5f12ad890-grpz8]: an error on the server ("unknown") has prevented the request from succeeding (get pods 
my-hostname-basic-092a071d-51d0-4bc2-8bce-80a5f12ad890-grpz8) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353813, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353814, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353814, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808353813, loc:(*time.Location)(0x798e100)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.6", PodIP:"192.168.2.27", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.2.27"}}, StartTime:(*v1.Time)(0xc0039ce340), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-092a071d-51d0-4bc2-8bce-80a5f12ad890", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0039ce3a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.21", ImageID:"k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a", ContainerID:"containerd://6746b69f3b53df1dbf15d537608826f0103793ea34fe3de79e31785e8fdf2ee2", Started:(*bool)(0xc00305e18a)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jan 3 14:50:49.761: FAIL: Did not get expected responses within the timeout period of 120.00 seconds.
Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func8.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65 +0x57 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002637c80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002637c80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002637c80, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:50:49.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1080" for this suite.
• Failure [436.461 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:50:49.761: Did not get expected responses within the timeout period of 120.00 seconds.
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65
------------------------------
{"msg":"FAILED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":14,"skipped":217,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:50:49.772: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should serve a basic image on each replica with a public image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating replication controller my-hostname-basic-d93df339-97dd-4941-ab24-e55d3a37c5ad
Jan 3 14:50:49.808: INFO: Pod name my-hostname-basic-d93df339-97dd-4941-ab24-e55d3a37c5ad: Found 0 pods out of 1
Jan 3 14:50:54.813: INFO: Pod name my-hostname-basic-d93df339-97dd-4941-ab24-e55d3a37c5ad: Found 1 pods out of 1
Jan 3 14:50:54.814: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d93df339-97dd-4941-ab24-e55d3a37c5ad" are running
Jan 3 14:50:54.816: INFO: Pod "my-hostname-basic-d93df339-97dd-4941-ab24-e55d3a37c5ad-xrbww" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-03 14:50:49 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-03 14:50:51 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-03 14:50:51 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-03 14:50:49 +0000 UTC Reason: Message:}])
Jan 3 14:50:54.817: INFO: Trying to dial the pod
Jan 3 14:50:59.828: INFO: Controller my-hostname-basic-d93df339-97dd-4941-ab24-e55d3a37c5ad: Got expected result from replica 1 [my-hostname-basic-d93df339-97dd-4941-ab24-e55d3a37c5ad-xrbww]: "my-hostname-basic-d93df339-97dd-4941-ab24-e55d3a37c5ad-xrbww", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:50:59.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9432" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":15,"skipped":217,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
S
------------------------------
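Note: this spec creates a ReplicationController whose replicas serve their own hostname and then fetches each replica through the apiserver proxy, which is the call that kept failing with the "unknown" error in the first attempt above. A hand-driven sketch (names are illustrative; 9376 is assumed to be the default port of agnhost's serve-hostname mode):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: hostname-rc
  spec:
    replicas: 1
    selector:
      app: hostname-rc
    template:
      metadata:
        labels:
          app: hostname-rc
      spec:
        containers:
        - name: agnhost
          image: k8s.gcr.io/e2e-test-images/agnhost:2.21
          args: ["serve-hostname"]
  EOF
  POD=$(kubectl get pods -l app=hostname-rc -o jsonpath='{.items[0].metadata.name}')
  kubectl get --raw "/api/v1/namespaces/default/pods/${POD}:9376/proxy/"   # should return the pod's hostname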
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:45:56.850: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Performing setup for networking test in namespace pod-network-test-3030
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 3 14:45:56.889: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 3 14:45:56.939: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 3 14:45:58.944: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:46:00.944: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:46:02.945: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:46:04.951: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:46:06.944: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:46:08.944: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:46:10.947: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 3 14:46:10.954: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jan 3 14:46:12.958: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jan 3 14:46:14.958: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jan 3 14:46:16.958: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jan 3 14:46:16.965: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jan 3 14:46:16.972: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jan 3 14:46:18.995: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Jan 3 14:46:18.995: INFO: Breadth first check of 192.168.0.27 on host 172.18.0.4...
Jan 3 14:46:18.998: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.47:9080/dial?request=hostname&protocol=http&host=192.168.0.27&port=8080&tries=1'] Namespace:pod-network-test-3030 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:46:18.998: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:46:19.091: INFO: Waiting for responses: map[]
Jan 3 14:46:19.092: INFO: reached 192.168.0.27 after 0/1 tries
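Note: the reachability matrix is driven by agnhost netexec's /dial endpoint: the framework execs curl inside the test-container-pod (192.168.1.47 in this run), which fans the request out to the target pod and reports which hostnames answered. The same probe can be issued by hand; the IPs below are from this run and will differ elsewhere:

  kubectl exec test-container-pod -n pod-network-test-3030 -- /bin/sh -c \
    "curl -g -q -s 'http://192.168.1.47:9080/dial?request=hostname&protocol=http&host=192.168.0.27&port=8080&tries=1'"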
Jan 3 14:46:19.092: INFO: Breadth first check of 192.168.1.42 on host 172.18.0.7...
Jan 3 14:46:19.095: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.47:9080/dial?request=hostname&protocol=http&host=192.168.1.42&port=8080&tries=1'] Namespace:pod-network-test-3030 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:46:19.096: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:46:19.203: INFO: Waiting for responses: map[]
Jan 3 14:46:19.203: INFO: reached 192.168.1.42 after 0/1 tries
Jan 3 14:46:19.203: INFO: Breadth first check of 192.168.2.33 on host 172.18.0.6...
Jan 3 14:46:19.206: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.47:9080/dial?request=hostname&protocol=http&host=192.168.2.33&port=8080&tries=1'] Namespace:pod-network-test-3030 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:46:19.206: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:46:24.294: INFO: Waiting for responses: map[netserver-2:{}]
Jan 3 14:46:26.294: INFO: Output of kubectl describe pod pod-network-test-3030/netserver-0:
Jan 3 14:46:26.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-3030 describe pod netserver-0 --namespace=pod-network-test-3030'
Jan 3 14:46:26.429: INFO: stderr: ""
Jan 3 14:46:26.429: INFO: stdout: "Name: netserver-0\nNamespace: pod-network-test-3030\nPriority: 0\nNode: k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh/172.18.0.4\nStart Time: Tue, 03 Jan 2023 14:45:56 +0000\nLabels: selector-a2768981-57c7-4866-8cff-287b37f16a8b=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.0.27\nIPs:\n IP: 192.168.0.27\nContainers:\n webserver:\n Container ID: containerd://792ed8ae8e93d90b820e67e395638b5253c536dfc92e40710f0a65c57c5da51a\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n Ports: 8080/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8080\n --udp-port=8081\n State: Running\n Started: Tue, 03 Jan 2023 14:45:57 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4r8sg (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-4r8sg:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4r8sg\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 30s default-scheduler Successfully assigned pod-network-test-3030/netserver-0 to k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh\n Normal Pulled 29s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n Normal Created 29s kubelet Created container webserver\n Normal Started 29s kubelet Started container webserver\n"
Jan 3 14:46:26.429: INFO: Output of kubectl describe pod pod-network-test-3030/netserver-1:
Jan 3 14:46:26.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-3030 describe pod netserver-1 --namespace=pod-network-test-3030'
Jan 3 14:46:26.550: INFO: stderr: ""
Jan 3 14:46:26.551: INFO: stdout: "Name: netserver-1\nNamespace: pod-network-test-3030\nPriority: 0\nNode: k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6/172.18.0.7\nStart Time: Tue, 03 Jan 2023 14:45:56 +0000\nLabels: selector-a2768981-57c7-4866-8cff-287b37f16a8b=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.1.42\nIPs:\n IP: 192.168.1.42\nContainers:\n webserver:\n Container ID: containerd://54a9ab7ba2114bd2954f14350ba2e199b57a9e7cdafc1ba6ef2424978b3f61aa\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n Ports: 8080/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8080\n --udp-port=8081\n State: Running\n Started: Tue, 03 Jan 2023 14:45:57 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4r8sg (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n
PodScheduled True \nVolumes:\n default-token-4r8sg:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4r8sg\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 30s default-scheduler Successfully assigned pod-network-test-3030/netserver-1 to k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6\n Normal Pulled 29s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n Normal Created 29s kubelet Created container webserver\n Normal Started 29s kubelet Started container webserver\n"
Jan 3 14:46:26.551: INFO: Output of kubectl describe pod pod-network-test-3030/netserver-2:
Jan 3 14:46:26.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-3030 describe pod netserver-2 --namespace=pod-network-test-3030'
Jan 3 14:46:26.670: INFO: stderr: ""
Jan 3 14:46:26.670: INFO: stdout: "Name: netserver-2\nNamespace: pod-network-test-3030\nPriority: 0\nNode: k8s-upgrade-and-conformance-1wcp0z-worker-erlai2/172.18.0.6\nStart Time: Tue, 03 Jan 2023 14:45:56 +0000\nLabels: selector-a2768981-57c7-4866-8cff-287b37f16a8b=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.2.33\nIPs:\n IP: 192.168.2.33\nContainers:\n webserver:\n Container ID: containerd://cfbbc39644e50ab57a76aa715441e4aaba1d71c1bade513f96f29dd6b75f61bb\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n Ports: 8080/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8080\n --udp-port=8081\n State: Running\n Started: Tue, 03 Jan 2023 14:45:57 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4r8sg (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-4r8sg:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4r8sg\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-1wcp0z-worker-erlai2\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 30s default-scheduler Successfully assigned pod-network-test-3030/netserver-2 to k8s-upgrade-and-conformance-1wcp0z-worker-erlai2\n Normal Pulled 29s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n Normal Created 29s kubelet Created container webserver\n Normal Started 29s kubelet Started container webserver\n"
Jan 3 14:46:26.670: INFO: Output of kubectl describe pod pod-network-test-3030/netserver-3:
Jan 3 14:46:26.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-3030 describe pod netserver-3 --namespace=pod-network-test-3030'
Jan 3 14:46:26.784: INFO: stderr: ""
Jan 3 14:46:26.784: INFO: stdout: "Name: netserver-3\nNamespace: pod-network-test-3030\nPriority: 0\nNode: k8s-upgrade-and-conformance-1wcp0z-worker-u044o2/172.18.0.5\nStart Time: Tue, 03 Jan 2023 14:45:56 +0000\nLabels: selector-a2768981-57c7-4866-8cff-287b37f16a8b=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.6.23\nIPs:\n IP: 192.168.6.23\nContainers:\n webserver:\n Container ID: containerd://8a489d8dabeefb23cafd574654134769fe0a861a77d97a31317ae0dc1fefc5d0\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n Ports: 8080/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8080\n --udp-port=8081\n State: Running\n Started: Tue, 03 Jan 2023 14:45:57 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4r8sg (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-4r8sg:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4r8sg\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-1wcp0z-worker-u044o2\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 30s default-scheduler Successfully assigned pod-network-test-3030/netserver-3 to k8s-upgrade-and-conformance-1wcp0z-worker-u044o2\n Normal Pulled 29s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n Normal Created 29s kubelet Created container webserver\n Normal Started 29s kubelet Started container webserver\n"
#failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-4r8sg (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-4r8sg: Type: Secret (a volume populated by a Secret) SecretName: default-token-4r8sg Optional: false QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 30s default-scheduler Successfully assigned pod-network-test-3030/netserver-3 to k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 Normal Pulled 29s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine Normal Created 29s kubelet Created container webserver Normal Started 29s kubelet Started container webserver Jan 3 14:46:26.784: INFO: encountered error during dial (did not find expected responses... Tries 1 Command curl -g -q -s 'http://192.168.1.47:9080/dial?request=hostname&protocol=http&host=192.168.2.33&port=8080&tries=1' retrieved map[] expected map[netserver-2:{}]) Jan 3 14:46:26.784: INFO: ...failed...will try again in next pass Jan 3 14:46:26.784: INFO: Breadth first check of 192.168.6.23 on host 172.18.0.5... Jan 3 14:46:26.787: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.47:9080/dial?request=hostname&protocol=http&host=192.168.6.23&port=8080&tries=1'] Namespace:pod-network-test-3030 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 3 14:46:26.787: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 3 14:46:26.879: INFO: Waiting for responses: map[] Jan 3 14:46:26.879: INFO: reached 192.168.6.23 after 0/1 tries Jan 3 14:46:26.879: INFO: Going to retry 1 out of 4 pods.... Jan 3 14:46:26.879: INFO: Doublechecking 1 pods in host 172.18.0.6 which werent seen the first time. 
Jan 3 14:46:26.879: INFO: Now attempting to probe pod [[[ 192.168.2.33 ]]]
Jan 3 14:46:26.883: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.47:9080/dial?request=hostname&protocol=http&host=192.168.2.33&port=8080&tries=1'] Namespace:pod-network-test-3030 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:46:26.883: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:46:31.965: INFO: Waiting for responses: map[netserver-2:{}]
[the identical ExecWithOptions probe of 192.168.2.33 was repeated every ~7s from 14:46:33 through the final "Waiting for responses: map[netserver-2:{}]" at 14:51:51.185, 46 tries in total per the failure summary below, without ever receiving a response]
Jan 3 14:51:53.185: INFO: Output of kubectl describe pod pod-network-test-3030/netserver-0:
Jan 3 14:51:53.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-3030 describe pod netserver-0 --namespace=pod-network-test-3030'
Jan 3 14:51:53.310: INFO: stderr: ""
Jan 3 14:51:53.310: INFO: Name: netserver-0 Namespace: pod-network-test-3030 Priority: 0 Node: k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh/172.18.0.4 Start Time: Tue, 03 Jan 2023 14:45:56 +0000 Labels: selector-a2768981-57c7-4866-8cff-287b37f16a8b=true Annotations: <none> Status: Running IP: 192.168.0.27 IPs: IP: 192.168.0.27 Containers: webserver: Container ID: containerd://792ed8ae8e93d90b820e67e395638b5253c536dfc92e40710f0a65c57c5da51a Image: k8s.gcr.io/e2e-test-images/agnhost:2.21 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a Ports: 8080/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8080 --udp-port=8081 State: Running Started: Tue, 03 Jan 2023 14:45:57 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-4r8sg (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-4r8sg: Type: Secret (a volume populated by a Secret) SecretName: default-token-4r8sg Optional: false QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 5m57s default-scheduler Successfully assigned pod-network-test-3030/netserver-0 to k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh Normal Pulled 5m56s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine Normal Created 5m56s kubelet Created container webserver Normal Started 5m56s kubelet Started container webserver
Jan 3 14:51:53.310: INFO: Output of kubectl describe pod pod-network-test-3030/netserver-1:
Jan 3 14:51:53.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-3030 describe pod netserver-1 --namespace=pod-network-test-3030'
Jan 3 14:51:53.424: INFO: stderr: ""
Jan 3 14:51:53.424: INFO: Name: netserver-1 Namespace: pod-network-test-3030 Priority: 0 Node: k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6/172.18.0.7 Start Time: Tue, 03 Jan 2023 14:45:56 +0000 Labels: selector-a2768981-57c7-4866-8cff-287b37f16a8b=true Annotations: <none> Status: Running IP: 192.168.1.42 IPs: IP: 192.168.1.42 Containers: webserver: Container ID: containerd://54a9ab7ba2114bd2954f14350ba2e199b57a9e7cdafc1ba6ef2424978b3f61aa Image: k8s.gcr.io/e2e-test-images/agnhost:2.21 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a Ports: 8080/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8080 --udp-port=8081 State: Running Started: Tue, 03 Jan 2023 14:45:57 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-4r8sg (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-4r8sg: Type: Secret (a volume populated by a Secret) SecretName: default-token-4r8sg Optional: false QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 5m57s default-scheduler Successfully assigned pod-network-test-3030/netserver-1 to k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 Normal Pulled 5m56s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine Normal Created 5m56s kubelet Created container webserver Normal Started 5m56s kubelet Started container webserver
Jan 3 14:51:53.424: INFO: Output of kubectl describe pod pod-network-test-3030/netserver-2:
Jan 3 14:51:53.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-3030 describe pod netserver-2 --namespace=pod-network-test-3030'
Jan 3 14:51:53.551: INFO: stderr: ""
Jan 3 14:51:53.551: INFO: Name: netserver-2 Namespace: pod-network-test-3030 Priority: 0 Node: k8s-upgrade-and-conformance-1wcp0z-worker-erlai2/172.18.0.6 Start Time: Tue, 03 Jan 2023 14:45:56 +0000 Labels: selector-a2768981-57c7-4866-8cff-287b37f16a8b=true Annotations: <none> Status: Running IP: 192.168.2.33 IPs: IP: 192.168.2.33 Containers: webserver: Container ID: containerd://cfbbc39644e50ab57a76aa715441e4aaba1d71c1bade513f96f29dd6b75f61bb Image: k8s.gcr.io/e2e-test-images/agnhost:2.21 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a Ports: 8080/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8080 --udp-port=8081 State: Running Started: Tue, 03 Jan 2023 14:45:57 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-4r8sg (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-4r8sg: Type: Secret (a volume populated by a Secret) SecretName: default-token-4r8sg Optional: false QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-1wcp0z-worker-erlai2 Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 5m57s default-scheduler Successfully assigned pod-network-test-3030/netserver-2 to k8s-upgrade-and-conformance-1wcp0z-worker-erlai2 Normal Pulled 5m56s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine Normal Created 5m56s kubelet Created container webserver Normal Started 5m56s kubelet Started container webserver
Jan 3 14:51:53.551: INFO: Output of kubectl describe pod pod-network-test-3030/netserver-3:
Jan 3 14:51:53.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-3030 describe pod netserver-3 --namespace=pod-network-test-3030'
Jan 3 14:51:53.667: INFO: stderr: ""
Jan 3 14:51:53.667: INFO: Name: netserver-3 Namespace: pod-network-test-3030 Priority: 0 Node: k8s-upgrade-and-conformance-1wcp0z-worker-u044o2/172.18.0.5 Start Time: Tue, 03 Jan 2023 14:45:56 +0000 Labels: selector-a2768981-57c7-4866-8cff-287b37f16a8b=true Annotations: <none> Status: Running IP: 192.168.6.23 IPs: IP: 192.168.6.23 Containers: webserver: Container ID: containerd://8a489d8dabeefb23cafd574654134769fe0a861a77d97a31317ae0dc1fefc5d0 Image: k8s.gcr.io/e2e-test-images/agnhost:2.21 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a Ports: 8080/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8080 --udp-port=8081 State: Running Started: Tue, 03 Jan 2023 14:45:57 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-4r8sg (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-4r8sg: Type: Secret (a volume populated by a Secret) SecretName: default-token-4r8sg Optional: false QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 5m57s default-scheduler Successfully assigned pod-network-test-3030/netserver-3 to k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 Normal Pulled 5m56s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine Normal Created 5m56s kubelet Created container webserver Normal Started 5m56s kubelet Started container webserver
Jan 3 14:51:53.667: INFO: encountered error during dial (did not find expected responses... Tries 46 Command curl -g -q -s 'http://192.168.1.47:9080/dial?request=hostname&protocol=http&host=192.168.2.33&port=8080&tries=1' retrieved map[] expected map[netserver-2:{}])
Jan 3 14:51:53.667: INFO: ... Done probing pod [[[ 192.168.2.33 ]]]
Jan 3 14:51:53.667: INFO: succeeded at polling 3 out of 4 connections
Jan 3 14:51:53.667: INFO: pod polling failure summary:
Jan 3 14:51:53.667: INFO: Collected error: did not find expected responses... Tries 46 Command curl -g -q -s 'http://192.168.1.47:9080/dial?request=hostname&protocol=http&host=192.168.2.33&port=8080&tries=1' retrieved map[] expected map[netserver-2:{}]
Jan 3 14:51:53.668: FAIL: failed, 1 out of 4 connections failed
Full Stack Trace
k8s.io/kubernetes/test/e2e/common.glob..func16.1.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:82 +0x69
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00248bc80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00248bc80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00248bc80, 0x4fc9940)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:51:53.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3030" for this suite.
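The /dial probe that kept timing out above is plain HTTP: the e2e framework execs a curl inside test-container-pod (192.168.1.47), whose agnhost server then asks the netexec server on the target pod for its hostname. While the namespace still exists it can be replayed by hand; a sketch using only names and paths from the log (the direct /hostname request is an addition; agnhost netexec serves it on port 8080):

  # Replay the exact probe the test ran, from the same vantage point:
  kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-3030 exec test-container-pod -- \
    /bin/sh -c "curl -g -q -s 'http://192.168.1.47:9080/dial?request=hostname&protocol=http&host=192.168.2.33&port=8080&tries=1'"

  # Probe the unreachable backend directly from the same pod:
  kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-3030 exec test-container-pod -- \
    /bin/sh -c "curl -s 'http://192.168.2.33:8080/hostname'"

In this run the describe output shows netserver-2 Running and Ready, and the identical probe to 192.168.6.23 on k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 succeeded, which suggests the failure is specific to traffic toward 192.168.2.33 on k8s-upgrade-and-conformance-1wcp0z-worker-erlai2 rather than to the pod itself.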
• Failure [356.829 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

    Jan 3 14:51:53.668: failed, 1 out of 4 connections failed

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:82
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:47:20.494: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0103 14:47:21.590375 14 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jan 3 14:52:21.595: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:52:21.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3210" for this suite.
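The orphaning semantics this test verifies can be reproduced with plain kubectl; a minimal sketch with hypothetical names (--cascade=orphan is the spelling on newer kubectl, roughly v1.20+; older clients use --cascade=false):

  # A Deployment creates and owns a ReplicaSet.
  kubectl create deployment test-orphan --image=docker.io/library/httpd:2.4.38-alpine
  kubectl get rs -l app=test-orphan      # note the ReplicaSet the Deployment created

  # Delete only the Deployment; PropagationPolicy=Orphan leaves dependents behind.
  kubectl delete deployment test-orphan --cascade=orphan

  # The ReplicaSet (and its Pods) should survive the owner's deletion.
  kubectl get rs -l app=test-orphan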
• [SLOW TEST:301.111 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":41,"skipped":704,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:52:21.613: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1520
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 3 14:52:21.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-827 run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine'
Jan 3 14:52:21.765: INFO: stderr: ""
Jan 3 14:52:21.765: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524
Jan 3 14:52:21.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-827 delete pods e2e-test-httpd-pod'
Jan 3 14:52:34.877: INFO: stderr: ""
Jan 3 14:52:34.877: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:52:34.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-827" for this suite.
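The commands this test logs can be run verbatim against any cluster; only the middle verification step below is an addition:

  kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-827 run e2e-test-httpd-pod \
    --restart=Never --image=docker.io/library/httpd:2.4.38-alpine
  # --restart=Never should surface as restartPolicy: Never on the created Pod:
  kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-827 get pod e2e-test-httpd-pod \
    -o jsonpath='{.spec.restartPolicy}'
  kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-827 delete pods e2e-test-httpd-pod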
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":42,"skipped":707,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:52:34.903: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:52:35.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1791" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":43,"skipped":716,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
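The fetch/patch/delete/list cycle above maps onto ordinary kubectl operations against the Events API; a rough equivalent (the event name is hypothetical, and the test itself drives events.k8s.io/v1 through client-go rather than kubectl):

  kubectl get events -A                                                # listing events in all namespaces
  kubectl -n events-1791 get events                                    # listing events in the test namespace
  kubectl -n events-1791 get events --field-selector reason=Created    # field-selection filtering
  kubectl -n events-1791 get event my-test-event -o yaml               # fetching a single event
  kubectl -n events-1791 delete event my-test-event                    # deleting it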
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:52:35.077: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:52:51.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8623" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":44,"skipped":740,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
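The two quotas above differ only in scope: one counts only best-effort pods (pods with no resource requests or limits), the other only pods that do set them. A minimal sketch, names illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-besteffort              # hypothetical name
spec:
  hard:
    pods: "5"
  scopes: ["BestEffort"]              # matches only pods with no requests/limits
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-besteffort          # hypothetical name
spec:
  hard:
    pods: "5"
  scopes: ["NotBestEffort"]           # matches only pods that set requests or limits

A best-effort pod counts against the first quota and is ignored by the second, which is exactly the accounting the STEP lines verify in both directions.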
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:52:51.216: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-map-0c483a70-80d0-4ce9-9ce6-742cba402788
STEP: Creating a pod to test consume secrets
Jan 3 14:52:51.266: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4b29a348-ff63-4ddc-b474-03fad5baed4a" in namespace "projected-7340" to be "Succeeded or Failed"
Jan 3 14:52:51.272: INFO: Pod "pod-projected-secrets-4b29a348-ff63-4ddc-b474-03fad5baed4a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.851975ms
Jan 3 14:52:53.276: INFO: Pod "pod-projected-secrets-4b29a348-ff63-4ddc-b474-03fad5baed4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008918883s
STEP: Saw pod success
Jan 3 14:52:53.276: INFO: Pod "pod-projected-secrets-4b29a348-ff63-4ddc-b474-03fad5baed4a" satisfied condition "Succeeded or Failed"
Jan 3 14:52:53.278: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 pod pod-projected-secrets-4b29a348-ff63-4ddc-b474-03fad5baed4a container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 3 14:52:53.305: INFO: Waiting for pod pod-projected-secrets-4b29a348-ff63-4ddc-b474-03fad5baed4a to disappear
Jan 3 14:52:53.307: INFO: Pod pod-projected-secrets-4b29a348-ff63-4ddc-b474-03fad5baed4a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:52:53.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7340" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":747,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:52:53.359: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 3 14:52:53.396: INFO: Waiting up to 5m0s for pod "pod-47f50f44-7afe-4074-a0ae-db00c95fc1c2" in namespace "emptydir-4417" to be "Succeeded or Failed"
Jan 3 14:52:53.399: INFO: Pod "pod-47f50f44-7afe-4074-a0ae-db00c95fc1c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.83637ms
Jan 3 14:52:55.404: INFO: Pod "pod-47f50f44-7afe-4074-a0ae-db00c95fc1c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007825333s
STEP: Saw pod success
Jan 3 14:52:55.404: INFO: Pod "pod-47f50f44-7afe-4074-a0ae-db00c95fc1c2" satisfied condition "Succeeded or Failed"
Jan 3 14:52:55.407: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 pod pod-47f50f44-7afe-4074-a0ae-db00c95fc1c2 container test-container: <nil>
STEP: delete the pod
Jan 3 14:52:55.425: INFO: Waiting for pod pod-47f50f44-7afe-4074-a0ae-db00c95fc1c2 to disappear
Jan 3 14:52:55.428: INFO: Pod pod-47f50f44-7afe-4074-a0ae-db00c95fc1c2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:52:55.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4417" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":780,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:52:55.451: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:52:55.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1431" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":47,"skipped":788,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
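The lifecycle above is plain CRUD on a ConfigMap plus a label-selector list and delete-by-collection. A rough kubectl rendition, with all names and values illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config                # hypothetical name
  labels:
    test: lifecycle
data:
  key1: value1
# kubectl create -f example-config.yaml
# kubectl get configmap example-config -o yaml
# kubectl patch configmap example-config --type merge -p '{"data":{"key1":"patched"}}'
# kubectl get configmaps --all-namespaces -l test=lifecycle
# kubectl delete configmaps -l test=lifecycle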
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:52:55.529: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if Kubernetes control plane services is included in cluster-info [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: validating cluster-info
Jan 3 14:52:55.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5252 cluster-info'
Jan 3 14:52:55.659: INFO: stderr: ""
Jan 3 14:52:55.659: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.18.0.3:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:52:55.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5252" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":48,"skipped":788,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:52:55.719: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 3 14:52:56.291: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 3 14:52:58.302: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354376, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354376, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354376, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354376, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 3 14:53:01.323: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:53:01.327: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1968-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:53:02.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7445" for this suite.
STEP: Destroying namespace "webhook-7445-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":49,"skipped":812,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
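The "Registering the mutating webhook ... via the AdmissionRegistration API" step corresponds to creating a MutatingWebhookConfiguration that points at the webhook service. A structural sketch only; the path, rule values, and CA bundle are placeholders, not what this suite actually registers:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-mutating-webhook              # hypothetical name
webhooks:
- name: mutate-crd.example.com             # hypothetical webhook name
  clientConfig:
    service:
      name: e2e-test-webhook               # service name seen in the log
      namespace: webhook-7445
      path: /mutating-custom-resource      # placeholder path
      port: 443
    caBundle: PLACEHOLDER_BASE64_CA        # must be the base64 CA that signed the serving cert
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-1968-crds"]
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
  failurePolicy: Fail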
namespace "webhook-7445" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-7445-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":49,"skipped":812,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 3 14:53:02.528: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename security-context-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 3 14:53:02.581: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-f6bac0a6-4ce8-4284-baea-7246f1d5840d" in namespace "security-context-test-5191" to be "Succeeded or Failed" Jan 3 14:53:02.584: INFO: Pod "alpine-nnp-false-f6bac0a6-4ce8-4284-baea-7246f1d5840d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.30702ms Jan 3 14:53:04.589: INFO: Pod "alpine-nnp-false-f6bac0a6-4ce8-4284-baea-7246f1d5840d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00763589s Jan 3 14:53:06.594: INFO: Pod "alpine-nnp-false-f6bac0a6-4ce8-4284-baea-7246f1d5840d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012670473s Jan 3 14:53:06.594: INFO: Pod "alpine-nnp-false-f6bac0a6-4ce8-4284-baea-7246f1d5840d" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 14:53:06.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "security-context-test-5191" for this suite. 
SSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:53:06.624: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test service account token:
Jan 3 14:53:06.667: INFO: Waiting up to 5m0s for pod "test-pod-165441cf-acd4-42c7-808d-b66996708adb" in namespace "svcaccounts-6179" to be "Succeeded or Failed"
Jan 3 14:53:06.670: INFO: Pod "test-pod-165441cf-acd4-42c7-808d-b66996708adb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.95026ms
Jan 3 14:53:08.674: INFO: Pod "test-pod-165441cf-acd4-42c7-808d-b66996708adb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007059526s
STEP: Saw pod success
Jan 3 14:53:08.674: INFO: Pod "test-pod-165441cf-acd4-42c7-808d-b66996708adb" satisfied condition "Succeeded or Failed"
Jan 3 14:53:08.677: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 pod test-pod-165441cf-acd4-42c7-808d-b66996708adb container agnhost-container: <nil>
STEP: delete the pod
Jan 3 14:53:08.702: INFO: Waiting for pod test-pod-165441cf-acd4-42c7-808d-b66996708adb to disappear
Jan 3 14:53:08.707: INFO: Pod test-pod-165441cf-acd4-42c7-808d-b66996708adb no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:53:08.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6179" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":51,"skipped":821,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
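"Mount projected service account token" refers to a projected volume with a serviceAccountToken source, which gives the container a short-lived, rotated token rather than the legacy secret-based one. A minimal sketch, names and durations illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: projected-token-demo           # hypothetical name
spec:
  containers:
  - name: main
    image: busybox:1.29                # assumption
    command: ["sh", "-c", "cat /var/run/secrets/tokens/sa-token; sleep 3600"]
    volumeMounts:
    - name: token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: token
    projected:
      sources:
      - serviceAccountToken:
          path: sa-token
          expirationSeconds: 3600      # kubelet rotates the token before it expires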
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:53:08.737: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap that has name configmap-test-emptyKey-09aec12a-d063-4ffb-9f53-72b6dbd2f93f
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:53:08.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3635" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":52,"skipped":836,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:53:08.795: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod liveness-eb6666c5-7c11-4ff1-8b14-8cf6367d4e85 in namespace container-probe-7101
Jan 3 14:53:10.838: INFO: Started pod liveness-eb6666c5-7c11-4ff1-8b14-8cf6367d4e85 in namespace container-probe-7101
STEP: checking the pod's current state and verifying that restartCount is present
Jan 3 14:53:10.842: INFO: Initial restart count of pod liveness-eb6666c5-7c11-4ff1-8b14-8cf6367d4e85 is 0
Jan 3 14:53:32.898: INFO: Restart count of pod container-probe-7101/liveness-eb6666c5-7c11-4ff1-8b14-8cf6367d4e85 is now 1 (22.055899292s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:53:32.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7101" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":851,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
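The restart observed above (restartCount 0 -> 1 after ~22s) is the kubelet reacting to a failing httpGet liveness probe. A rough shape of such a pod; the image is a placeholder for the suite's liveness test server, which deliberately starts failing /healthz after a while:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo             # hypothetical name
spec:
  containers:
  - name: main
    image: registry.example/liveness:demo   # placeholder; fails /healthz after startup
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1
# Once /healthz returns a failure, the kubelet kills and restarts the container
# and restartCount increments, which is exactly what the test polls for.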
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:53:32.968: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:53:33.045: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"92ab1220-cca1-4d55-84d6-6db0b63baa66", Controller:(*bool)(0xc001e24b3a), BlockOwnerDeletion:(*bool)(0xc001e24b3b)}}
Jan 3 14:53:33.068: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"53410ca2-512e-49e7-ac2b-dd7e2d8d4f77", Controller:(*bool)(0xc0045cf44a), BlockOwnerDeletion:(*bool)(0xc0045cf44b)}}
Jan 3 14:53:33.075: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"40eb51a1-b173-41db-a29f-7834bc37e433", Controller:(*bool)(0xc001e24d26), BlockOwnerDeletion:(*bool)(0xc001e24d27)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:53:38.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1403" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":54,"skipped":877,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
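The three INFO lines show pod1 owned by pod3, pod2 by pod1, and pod3 by pod2, i.e. a deliberate ownership cycle. UIDs are server-assigned, so the cycle can only be wired up after the pods exist; structurally, each link looks like this (UID taken from the log for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 92ab1220-cca1-4d55-84d6-6db0b63baa66   # must match the live pod3 object's UID
    controller: true
    blockOwnerDeletion: true
# pod2 references pod1 and pod3 references pod2 the same way, closing the circle;
# the test asserts the garbage collector still deletes all three instead of deadlocking.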
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:53:38.133: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support RuntimeClasses API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: getting /apis
STEP: getting /apis/node.k8s.io
STEP: getting /apis/node.k8s.io/v1
STEP: creating
STEP: watching
Jan 3 14:53:38.184: INFO: starting watch
STEP: getting
STEP: listing
STEP: patching
STEP: updating
Jan 3 14:53:38.209: INFO: waiting for watch events with expected annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:53:38.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-8427" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":55,"skipped":898,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
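A RuntimeClass is a small cluster-scoped object: a name plus the CRI handler it selects. A minimal sketch of the shape those API verbs operate on (all values illustrative; the handler must name something the node's runtime is configured with):

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: demo-runtime                   # hypothetical name
handler: runc                          # assumption: a handler known to the container runtime
# kubectl get runtimeclasses
# kubectl patch runtimeclass demo-runtime --type merge -p '{"metadata":{"annotations":{"demo":"patched"}}}'
# kubectl delete runtimeclass demo-runtime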
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:53:38.266: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should create services for rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating Agnhost RC
Jan 3 14:53:38.308: INFO: namespace kubectl-2216
Jan 3 14:53:38.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2216 create -f -'
Jan 3 14:53:39.306: INFO: stderr: ""
Jan 3 14:53:39.306: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jan 3 14:53:40.311: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 3 14:53:40.311: INFO: Found 0 / 1
Jan 3 14:53:41.310: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 3 14:53:41.310: INFO: Found 1 / 1
Jan 3 14:53:41.310: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 3 14:53:41.314: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 3 14:53:41.314: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 3 14:53:41.314: INFO: wait on agnhost-primary startup in kubectl-2216
Jan 3 14:53:41.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2216 logs agnhost-primary-cdjx5 agnhost-primary'
Jan 3 14:53:41.416: INFO: stderr: ""
Jan 3 14:53:41.416: INFO: stdout: "Paused\n"
STEP: exposing RC
Jan 3 14:53:41.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2216 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379'
Jan 3 14:53:41.542: INFO: stderr: ""
Jan 3 14:53:41.542: INFO: stdout: "service/rm2 exposed\n"
Jan 3 14:53:41.549: INFO: Service rm2 in namespace kubectl-2216 found.
STEP: exposing service
Jan 3 14:53:43.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2216 expose service rm2 --name=rm3 --port=2345 --target-port=6379'
Jan 3 14:53:43.674: INFO: stderr: ""
Jan 3 14:53:43.674: INFO: stdout: "service/rm3 exposed\n"
Jan 3 14:53:43.683: INFO: Service rm3 in namespace kubectl-2216 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:53:45.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2216" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":56,"skipped":910,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:53:45.724: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 3 14:53:45.770: INFO: Waiting up to 5m0s for pod "pod-0784a288-15fc-45f6-99d1-dd311bcb735e" in namespace "emptydir-7929" to be "Succeeded or Failed"
Jan 3 14:53:45.774: INFO: Pod "pod-0784a288-15fc-45f6-99d1-dd311bcb735e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.347421ms
Jan 3 14:53:47.779: INFO: Pod "pod-0784a288-15fc-45f6-99d1-dd311bcb735e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008614707s
STEP: Saw pod success
Jan 3 14:53:47.779: INFO: Pod "pod-0784a288-15fc-45f6-99d1-dd311bcb735e" satisfied condition "Succeeded or Failed"
Jan 3 14:53:47.782: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh pod pod-0784a288-15fc-45f6-99d1-dd311bcb735e container test-container: <nil>
STEP: delete the pod
Jan 3 14:53:47.820: INFO: Waiting for pod pod-0784a288-15fc-45f6-99d1-dd311bcb735e to disappear
Jan 3 14:53:47.825: INFO: Pod pod-0784a288-15fc-45f6-99d1-dd311bcb735e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:53:47.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7929" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":922,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:53:47.919: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test override all
Jan 3 14:53:47.965: INFO: Waiting up to 5m0s for pod "client-containers-14ae3a2d-1071-4cc5-817c-7ef0553e6c31" in namespace "containers-6146" to be "Succeeded or Failed"
Jan 3 14:53:47.969: INFO: Pod "client-containers-14ae3a2d-1071-4cc5-817c-7ef0553e6c31": Phase="Pending", Reason="", readiness=false. Elapsed: 3.167468ms
Jan 3 14:53:49.973: INFO: Pod "client-containers-14ae3a2d-1071-4cc5-817c-7ef0553e6c31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007805701s
STEP: Saw pod success
Jan 3 14:53:49.973: INFO: Pod "client-containers-14ae3a2d-1071-4cc5-817c-7ef0553e6c31" satisfied condition "Succeeded or Failed"
Jan 3 14:53:49.976: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh pod client-containers-14ae3a2d-1071-4cc5-817c-7ef0553e6c31 container agnhost-container: <nil>
STEP: delete the pod
Jan 3 14:53:49.992: INFO: Waiting for pod client-containers-14ae3a2d-1071-4cc5-817c-7ef0553e6c31 to disappear
Jan 3 14:53:49.995: INFO: Pod client-containers-14ae3a2d-1071-4cc5-817c-7ef0553e6c31 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:53:49.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6146" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":957,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:53:50.036: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 3 14:53:50.087: INFO: Waiting up to 5m0s for pod "pod-d909a8a7-7b7b-4e9f-95e6-0acc4385a9d6" in namespace "emptydir-7784" to be "Succeeded or Failed"
Jan 3 14:53:50.091: INFO: Pod "pod-d909a8a7-7b7b-4e9f-95e6-0acc4385a9d6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.30273ms
Jan 3 14:53:52.095: INFO: Pod "pod-d909a8a7-7b7b-4e9f-95e6-0acc4385a9d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007440641s
STEP: Saw pod success
Jan 3 14:53:52.095: INFO: Pod "pod-d909a8a7-7b7b-4e9f-95e6-0acc4385a9d6" satisfied condition "Succeeded or Failed"
Jan 3 14:53:52.098: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh pod pod-d909a8a7-7b7b-4e9f-95e6-0acc4385a9d6 container test-container: <nil>
STEP: delete the pod
Jan 3 14:53:52.114: INFO: Waiting for pod pod-d909a8a7-7b7b-4e9f-95e6-0acc4385a9d6 to disappear
Jan 3 14:53:52.117: INFO: Pod pod-d909a8a7-7b7b-4e9f-95e6-0acc4385a9d6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:53:52.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7784" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":972,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
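Both emptyDir cases above follow the same shape: mount an emptyDir, create a file with the mode under test, and check what the container sees. A minimal sketch, names and image illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29                # assumption; the suite uses its own test image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # default medium = node disk; medium: Memory selects tmpfs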
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] IngressClass API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:53:52.165: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename ingressclass
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] IngressClass API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148
[It] should support creating IngressClass API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jan 3 14:53:52.225: INFO: starting watch
STEP: patching
STEP: updating
Jan 3 14:53:52.236: INFO: waiting for watch events with expected annotations
Jan 3 14:53:52.236: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:53:52.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-5172" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":60,"skipped":994,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
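An IngressClass is likewise little more than a name and a controller string; the API verbs in the STEP lines all operate on objects of this shape (all values illustrative):

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-class                  # hypothetical name
spec:
  controller: example.com/ingress-controller   # hypothetical controller identifier
# kubectl get ingressclasses
# kubectl patch ingressclass example-class --type merge -p '{"metadata":{"annotations":{"demo":"patched"}}}'
# kubectl delete ingressclass example-class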
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:53:52.338: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 3 14:53:52.894: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 3 14:53:55.918: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:53:56.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2570" for this suite.
STEP: Destroying namespace "webhook-2570-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":61,"skipped":1035,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:53:56.216: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-8336
[It] should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating statefulset ss in namespace statefulset-8336
Jan 3 14:53:56.345: INFO: Found 0 stateful pods, waiting for 1
Jan 3 14:54:06.349: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Jan 3 14:54:06.366: INFO: Deleting all statefulset in ns statefulset-8336
Jan 3 14:54:06.370: INFO: Scaling statefulset ss to 0
Jan 3 14:54:26.396: INFO: Waiting for statefulset status.replicas updated to 0
Jan 3 14:54:26.399: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:54:26.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8336" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":62,"skipped":1049,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
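The scale subresource that test reads and updates is a separate Scale object served under the StatefulSet's URL. Roughly, using the names from this run (the replica count is an illustrative target):

# kubectl get --raw /apis/apps/v1/namespaces/statefulset-8336/statefulsets/ss/scale
# returns (and "kubectl scale statefulset ss --replicas=2 -n statefulset-8336" updates) a:
apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: ss
  namespace: statefulset-8336
spec:
  replicas: 2                          # writing this field resizes the set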
SSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:54:26.448: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 3 14:54:26.487: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:54:31.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3037" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":63,"skipped":1058,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:54:31.929: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating service in namespace services-301
STEP: creating service affinity-clusterip in namespace services-301
STEP: creating replication controller affinity-clusterip in namespace services-301
I0103 14:54:31.980058      14 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-301, replica count: 3
I0103 14:54:35.031463      14 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 3 14:54:35.038: INFO: Creating new exec pod
Jan 3 14:54:38.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-301 exec execpod-affinityb97dl -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Jan 3 14:54:38.468: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n"
Jan 3 14:54:38.468: INFO: stdout: ""
Jan 3 14:54:38.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-301 exec execpod-affinityb97dl -- /bin/sh -x -c nc -zv -t -w 2 10.134.100.25 80'
Jan 3 14:54:38.664: INFO: stderr: "+ nc -zv -t -w 2 10.134.100.25 80\nConnection to 10.134.100.25 80 port [tcp/http] succeeded!\n"
Jan 3 14:54:38.664: INFO: stdout: ""
Jan 3 14:54:38.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-301 exec execpod-affinityb97dl -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.134.100.25:80/ ; done'
Jan 3 14:54:38.948: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.134.100.25:80/\n"
Jan 3 14:54:38.948: INFO: stdout: "\naffinity-clusterip-6s5fg\naffinity-clusterip-6s5fg\naffinity-clusterip-6s5fg\naffinity-clusterip-6s5fg\naffinity-clusterip-6s5fg\naffinity-clusterip-6s5fg\naffinity-clusterip-6s5fg\naffinity-clusterip-6s5fg\naffinity-clusterip-6s5fg\naffinity-clusterip-6s5fg\naffinity-clusterip-6s5fg\naffinity-clusterip-6s5fg\naffinity-clusterip-6s5fg\naffinity-clusterip-6s5fg\naffinity-clusterip-6s5fg\naffinity-clusterip-6s5fg"
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Received response from host: affinity-clusterip-6s5fg
Jan 3 14:54:38.948: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-301, will wait for the garbage collector to delete the pods
Jan 3 14:54:39.033: INFO: Deleting ReplicationController affinity-clusterip took: 5.695808ms
Jan 3 14:54:39.533: INFO: Terminating ReplicationController affinity-clusterip pods took: 500.395809ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:54:50.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-301" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
•
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":64,"skipped":1072,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
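The sixteen identical responses (all from affinity-clusterip-6s5fg) are the point of the test: with ClientIP session affinity, repeated connections from one client land on one backend. The field lives on the Service; a minimal sketch with an assumed selector and port layout:

apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip
spec:
  selector:
    app: affinity-clusterip            # assumption: matches the RC's pod template labels
  ports:
  - port: 80
    targetPort: 9376                   # illustrative backend port
  sessionAffinity: ClientIP            # default is None, which spreads clients across backends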
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:54:50.177: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 3 14:54:50.234: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7605a3f-d142-45b4-854a-9ff0051b3cbf" in namespace "downward-api-3870" to be "Succeeded or Failed"
Jan 3 14:54:50.238: INFO: Pod "downwardapi-volume-a7605a3f-d142-45b4-854a-9ff0051b3cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061268ms
Jan 3 14:54:52.242: INFO: Pod "downwardapi-volume-a7605a3f-d142-45b4-854a-9ff0051b3cbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008317084s
STEP: Saw pod success
Jan 3 14:54:52.242: INFO: Pod "downwardapi-volume-a7605a3f-d142-45b4-854a-9ff0051b3cbf" satisfied condition "Succeeded or Failed"
Jan 3 14:54:52.246: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 pod downwardapi-volume-a7605a3f-d142-45b4-854a-9ff0051b3cbf container client-container: <nil>
STEP: delete the pod
Jan 3 14:54:52.272: INFO: Waiting for pod downwardapi-volume-a7605a3f-d142-45b4-854a-9ff0051b3cbf to disappear
Jan 3 14:54:52.276: INFO: Pod downwardapi-volume-a7605a3f-d142-45b4-854a-9ff0051b3cbf no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:54:52.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3870" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":65,"skipped":1072,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
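When a container sets no memory limit, the downward API reports the node's allocatable memory instead, which is what this test asserts. The volume source looks roughly like this (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-demo                  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29                # assumption
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory      # with no limit set, falls back to node allocatable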
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":65,"skipped":1072,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:50:59.843: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod busybox-853b2b4e-a4e2-4566-86a9-4b23b6193d78 in namespace container-probe-9189
Jan 3 14:51:01.893: INFO: Started pod busybox-853b2b4e-a4e2-4566-86a9-4b23b6193d78 in namespace container-probe-9189
STEP: checking the pod's current state and verifying that restartCount is present
Jan 3 14:51:01.897: INFO: Initial restart count of pod busybox-853b2b4e-a4e2-4566-86a9-4b23b6193d78 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:55:02.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9189" for this suite.
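Both Probing container specs in this run hinge on an exec liveness probe that runs cat /tmp/health inside the container: as long as the file exists the probe passes and restartCount stays 0. A sketch of such a pod, written against this release line's core/v1 API (Probe embeds Handler here; newer releases renamed it ProbeHandler); the image, command, and probe timings are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Keep /tmp/health in place for the whole run so the probe
				// below never fails and the container is never restarted.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The "should be restarted" variant later in this log is the mirror image: its container removes /tmp/health after a few seconds, the probe starts failing, and the kubelet bumps restartCount, which the test observes.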
• [SLOW TEST:242.612 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":218,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
{"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":88,"failed":1,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:50:01.902: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5546.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5546.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5546.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5546.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5546.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5546.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 3 14:53:37.696: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-5546.svc.cluster.local from pod dns-5546/dns-test-454c8c21-2973-4b8a-8998-29ed8ee68d43: an error on the server ("unknown") has prevented the request from succeeding (get pods dns-test-454c8c21-2973-4b8a-8998-29ed8ee68d43)
Jan 3 14:55:03.979: FAIL: Unable to read wheezy_hosts@dns-querier-1 from pod dns-5546/dns-test-454c8c21-2973-4b8a-8998-29ed8ee68d43: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-5546/pods/dns-test-454c8c21-2973-4b8a-8998-29ed8ee68d43/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded

Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0035d7df8, 0xcb0200, 0x0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0030c3d00, 0xc0035d7df8, 0xc0030c3d00, 0xc0035d7df8)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0035d7df8, 0x4a, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc00389e400, 0x8, 0x8, 0x4dccbe5, 0x7, 0xc00069d800, 0x56112e0, 0xc00184e580, 0x0, 0x0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x158
k8s.io/kubernetes/test/e2e/network.assertFilesExist(...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:458
k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0011ca000, 0xc00069d800, 0xc00389e400, 0x8, 0x8)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:521 +0x34e
k8s.io/kubernetes/test/e2e/network.glob..func2.4()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:126 +0x62a
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0019dd680)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0019dd680)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0019dd680, 0x4fc9940)
  /usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1168 +0x2b3
E0103 14:55:03.980375 16 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jan 3 14:55:03.979: Unable to read wheezy_hosts@dns-querier-1 from pod dns-5546/dns-test-454c8c21-2973-4b8a-8998-29ed8ee68d43: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-5546/pods/dns-test-454c8c21-2973-4b8a-8998-29ed8ee68d43/proxy/results/wheezy_hosts@dns-querier-1\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"(identical to the Full Stack Trace printed above)"}
(
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.
But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call
  defer GinkgoRecover()
at the top of the goroutine that caused this panic.
)
goroutine 110 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x499f1e0, 0xc0038024c0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89
panic(0x499f1e0, 0xc0038024c0)
  /usr/local/go/src/runtime/panic.go:969 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0032de500, 0x12f, 0x77a462c, 0x7d, 0xd3, 0xc002676000, 0x7fb)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x41905e0, 0x5431f10)
  /usr/local/go/src/runtime/panic.go:969 +0x1b9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc0032de500, 0x12f, 0xc0035d78a0, 0x1, 0x1)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0032de500, 0x12f, 0xc0035d7988, 0x1, 0x1)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x4e68bfb, 0x24, 0xc0035d7be8, 0x4, 0x4)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:481 +0xa6d
(the next seven frames, wait.runConditionWithCrashProtection through network.glob..func2.4, are identical to the Full Stack Trace above)
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000678840, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000678840, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc001124a40, 0x54fc2e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001e6db30, 0x0, 0x54fc2e0, 0xc00015a8c0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001e6db30, 0x54fc2e0, 0xc00015a8c0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001e1d680, 0xc001e6db30, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001e1d680, 0x1)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001e1d680, 0xc002883310)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000148230, 0x7f39964ef6c0, 0xc0019dd680, 0x4e003e0, 0x14, 0xc0023ede90, 0x3, 0x3, 0x55b68a0, 0xc00015a8c0, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x5500f20, 0xc0019dd680, 0x4e003e0, 0x14, 0xc00143a280, 0x3, 0x4, 0x4)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x5500f20, 0xc0019dd680, 0x4e003e0, 0x14, 0xc0007d3160, 0x2, 0x2, 0x25)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0019dd680)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0019dd680)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0019dd680, 0x4fc9940)
  /usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1168 +0x2b3
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:55:03.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5546" for this suite.
• Failure [302.102 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

  Jan 3 14:55:03.979: Unable to read wheezy_hosts@dns-querier-1 from pod dns-5546/dns-test-454c8c21-2973-4b8a-8998-29ed8ee68d43: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-5546/pods/dns-test-454c8c21-2973-4b8a-8998-29ed8ee68d43/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211
------------------------------
[BeforeEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:55:02.461: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-secret-zw2b
STEP: Creating a pod to test atomic-volume-subpath
Jan 3 14:55:02.508: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zw2b" in namespace "subpath-9052" to be "Succeeded or Failed"
Jan 3 14:55:02.512: INFO: Pod "pod-subpath-test-secret-zw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.523374ms
Jan 3 14:55:04.516: INFO: Pod "pod-subpath-test-secret-zw2b": Phase="Running", Reason="", readiness=true. Elapsed: 2.008108926s
Jan 3 14:55:06.521: INFO: Pod "pod-subpath-test-secret-zw2b": Phase="Running", Reason="", readiness=true. Elapsed: 4.013173593s
Jan 3 14:55:08.526: INFO: Pod "pod-subpath-test-secret-zw2b": Phase="Running", Reason="", readiness=true. Elapsed: 6.017703148s
Jan 3 14:55:10.530: INFO: Pod "pod-subpath-test-secret-zw2b": Phase="Running", Reason="", readiness=true. Elapsed: 8.021813535s
Jan 3 14:55:12.535: INFO: Pod "pod-subpath-test-secret-zw2b": Phase="Running", Reason="", readiness=true. Elapsed: 10.026492971s
Jan 3 14:55:14.539: INFO: Pod "pod-subpath-test-secret-zw2b": Phase="Running", Reason="", readiness=true. Elapsed: 12.030721008s
Jan 3 14:55:16.543: INFO: Pod "pod-subpath-test-secret-zw2b": Phase="Running", Reason="", readiness=true. Elapsed: 14.035229254s
Jan 3 14:55:18.548: INFO: Pod "pod-subpath-test-secret-zw2b": Phase="Running", Reason="", readiness=true. Elapsed: 16.039718425s
Jan 3 14:55:20.553: INFO: Pod "pod-subpath-test-secret-zw2b": Phase="Running", Reason="", readiness=true. Elapsed: 18.044511912s
Jan 3 14:55:22.557: INFO: Pod "pod-subpath-test-secret-zw2b": Phase="Running", Reason="", readiness=true. Elapsed: 20.048850251s
Jan 3 14:55:24.562: INFO: Pod "pod-subpath-test-secret-zw2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.053568409s
STEP: Saw pod success
Jan 3 14:55:24.562: INFO: Pod "pod-subpath-test-secret-zw2b" satisfied condition "Succeeded or Failed"
Jan 3 14:55:24.566: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 pod pod-subpath-test-secret-zw2b container test-container-subpath-secret-zw2b: <nil>
STEP: delete the pod
Jan 3 14:55:24.593: INFO: Waiting for pod pod-subpath-test-secret-zw2b to disappear
Jan 3 14:55:24.596: INFO: Pod pod-subpath-test-secret-zw2b no longer exists
STEP: Deleting pod pod-subpath-test-secret-zw2b
Jan 3 14:55:24.596: INFO: Deleting pod "pod-subpath-test-secret-zw2b" in namespace "subpath-9052"
[AfterEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:55:24.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9052" for this suite.
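Stepping back to the DNS failure above: the stack trace bottoms out in wait.PollImmediate, the apimachinery helper the probe loop is built on, and the two duration arguments visible in the trace (0x12a05f200 and 0x8bb2c97000 nanoseconds) decode to a 5s poll interval and a 10m timeout. A standalone sketch of the same polling pattern, with a toy condition in place of the test's file check:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	// PollImmediate runs the condition once right away, then once per interval,
	// until it returns (true, nil), returns an error, or the timeout expires.
	// The DNS test polls this way for up to 10 minutes before failing with
	// "context deadline exceeded".
	err := wait.PollImmediate(500*time.Millisecond, 5*time.Second, func() (bool, error) {
		attempts++
		return attempts >= 3, nil // pretend the condition holds on the third try
	})
	fmt.Printf("attempts=%d err=%v\n", attempts, err)
}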
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":17,"skipped":220,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:55:24.661: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 3 14:55:24.710: INFO: Waiting up to 5m0s for pod "pod-7ee90337-74e9-4d11-8a91-e0bac95f1c79" in namespace "emptydir-4346" to be "Succeeded or Failed"
Jan 3 14:55:24.715: INFO: Pod "pod-7ee90337-74e9-4d11-8a91-e0bac95f1c79": Phase="Pending", Reason="", readiness=false. Elapsed: 3.694977ms
Jan 3 14:55:26.724: INFO: Pod "pod-7ee90337-74e9-4d11-8a91-e0bac95f1c79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01302545s
STEP: Saw pod success
Jan 3 14:55:26.724: INFO: Pod "pod-7ee90337-74e9-4d11-8a91-e0bac95f1c79" satisfied condition "Succeeded or Failed"
Jan 3 14:55:26.727: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 pod pod-7ee90337-74e9-4d11-8a91-e0bac95f1c79 container test-container: <nil>
STEP: delete the pod
Jan 3 14:55:26.751: INFO: Waiting for pod pod-7ee90337-74e9-4d11-8a91-e0bac95f1c79 to disappear
Jan 3 14:55:26.755: INFO: Pod pod-7ee90337-74e9-4d11-8a91-e0bac95f1c79 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:55:26.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4346" for this suite.
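The emptyDir spec above checks permission handling on the volume's default medium. Roughly, the pod under test mounts an emptyDir and verifies a mode-0777 file written as root on it; a sketch follows, with an illustrative image and command standing in for the test's mounttest binary.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a file with mode 0777 on the volume and show its bits.
				Command:      []string{"/bin/sh", "-c", "touch /cache/f && chmod 0777 /cache/f && ls -l /cache/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cache",
				// An empty Medium selects the node's default storage;
				// corev1.StorageMediumMemory would back the volume with tmpfs instead.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}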
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":251,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:55:26.778: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should release no longer matching pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 3 14:55:26.826: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 3 14:55:31.829: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:55:32.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7364" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":19,"skipped":256,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Lease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:55:32.865: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Lease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:55:32.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-3913" for this suite.
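The Lease spec above only needs the coordination.k8s.io API to round-trip create/get/update/delete. A minimal client-go sketch; the namespace, lease name, and holder identity are illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	holder := "demo-holder"
	seconds := int32(30)
	now := metav1.MicroTime{Time: time.Now()}
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-lease"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,  // who currently owns the lease
			LeaseDurationSeconds: &seconds, // how long the holder is considered valid
			AcquireTime:          &now,
			RenewTime:            &now,
		},
	}
	created, err := cs.CoordinationV1().Leases("default").Create(context.TODO(), lease, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created lease", created.Name)
}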
•
------------------------------
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":20,"skipped":262,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:55:33.106: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 3 14:55:33.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-848f67e8-59c1-4873-a752-48f9d16e9fb4" in namespace "downward-api-1900" to be "Succeeded or Failed"
Jan 3 14:55:33.151: INFO: Pod "downwardapi-volume-848f67e8-59c1-4873-a752-48f9d16e9fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.823025ms
Jan 3 14:55:35.155: INFO: Pod "downwardapi-volume-848f67e8-59c1-4873-a752-48f9d16e9fb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009246344s
STEP: Saw pod success
Jan 3 14:55:35.155: INFO: Pod "downwardapi-volume-848f67e8-59c1-4873-a752-48f9d16e9fb4" satisfied condition "Succeeded or Failed"
Jan 3 14:55:35.159: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh pod downwardapi-volume-848f67e8-59c1-4873-a752-48f9d16e9fb4 container client-container: <nil>
STEP: delete the pod
Jan 3 14:55:35.185: INFO: Waiting for pod downwardapi-volume-848f67e8-59c1-4873-a752-48f9d16e9fb4 to disappear
Jan 3 14:55:35.187: INFO: Pod downwardapi-volume-848f67e8-59c1-4873-a752-48f9d16e9fb4 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:55:35.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1900" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":366,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:55:35.219: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod busybox-6b55aa1d-5ce1-44db-afc8-27962d79d8cd in namespace container-probe-9054
Jan 3 14:55:37.272: INFO: Started pod busybox-6b55aa1d-5ce1-44db-afc8-27962d79d8cd in namespace container-probe-9054
STEP: checking the pod's current state and verifying that restartCount is present
Jan 3 14:55:37.276: INFO: Initial restart count of pod busybox-6b55aa1d-5ce1-44db-afc8-27962d79d8cd is 0
Jan 3 14:56:27.391: INFO: Restart count of pod container-probe-9054/busybox-6b55aa1d-5ce1-44db-afc8-27962d79d8cd is now 1 (50.115222628s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:56:27.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9054" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":380,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:56:27.474: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:56:27.507: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:56:28.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8257" for this suite.
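The CustomResourceDefinition spec above depends on the status subresource being enabled: with spec.versions[].subresources.status set, the custom resource gets a dedicated /status endpoint that can be gotten, updated, and patched independently of the rest of the object. A sketch of such a CRD follows; the group, kind, and schema are illustrative, and the program prints the object rather than registering it.

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	preserve := true
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec":   {Type: "object", XPreserveUnknownFields: &preserve},
							"status": {Type: "object", XPreserveUnknownFields: &preserve},
						},
					},
				},
				// This is the piece the test exercises: a separate /status endpoint.
				Subresources: &apiextensionsv1.CustomResourceSubresources{
					Status: &apiextensionsv1.CustomResourceSubresourceStatus{},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}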
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":23,"skipped":419,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:56:28.090: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 3 14:56:28.129: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea2698b7-0fc7-4e5a-8137-574d8fef4938" in namespace "downward-api-5028" to be "Succeeded or Failed"
Jan 3 14:56:28.133: INFO: Pod "downwardapi-volume-ea2698b7-0fc7-4e5a-8137-574d8fef4938": Phase="Pending", Reason="", readiness=false. Elapsed: 3.675655ms
Jan 3 14:56:30.137: INFO: Pod "downwardapi-volume-ea2698b7-0fc7-4e5a-8137-574d8fef4938": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008211857s
STEP: Saw pod success
Jan 3 14:56:30.137: INFO: Pod "downwardapi-volume-ea2698b7-0fc7-4e5a-8137-574d8fef4938" satisfied condition "Succeeded or Failed"
Jan 3 14:56:30.141: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 pod downwardapi-volume-ea2698b7-0fc7-4e5a-8137-574d8fef4938 container client-container: <nil>
STEP: delete the pod
Jan 3 14:56:30.160: INFO: Waiting for pod downwardapi-volume-ea2698b7-0fc7-4e5a-8137-574d8fef4938 to disappear
Jan 3 14:56:30.163: INFO: Pod downwardapi-volume-ea2698b7-0fc7-4e5a-8137-574d8fef4938 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:56:30.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5028" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":446,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:56:30.217: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 3 14:56:30.247: INFO: >>> kubeConfig: /tmp/kubeconfig
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering the sample API server.
Jan 3 14:56:30.841: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created
Jan 3 14:56:32.917: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354590, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354590, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354590, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354590, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 3 14:56:34.922: INFO: deployment status: (unchanged from the line above: still 0/1 ready, Available=False/MinimumReplicasUnavailable, Progressing=True/ReplicaSetUpdated for ReplicaSet "sample-apiserver-deployment-67dc674868")
Jan 3 14:56:38.049: INFO: Waited 1.113988975s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:56:38.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3927" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":25,"skipped":475,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:56:38.842: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:56:39.143: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:56:41.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1462" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":495,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:56:41.449: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 3 14:56:45.527: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 3 14:56:45.530: INFO: Pod pod-with-prestop-http-hook still exists
Jan 3 14:56:47.531: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 3 14:56:47.534: INFO: Pod pod-with-prestop-http-hook still exists
Jan 3 14:56:49.531: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 3 14:56:49.536: INFO: Pod pod-with-prestop-http-hook still exists
Jan 3 14:56:51.531: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 3 14:56:51.535: INFO: Pod pod-with-prestop-http-hook still exists
Jan 3 14:56:53.531: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 3 14:56:53.535: INFO: Pod pod-with-prestop-http-hook still exists
Jan 3 14:56:55.531: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 3 14:56:55.534: INFO: Pod pod-with-prestop-http-hook still exists
Jan 3 14:56:57.531: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 3 14:56:57.534: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:56:57.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1588" for this suite.
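The lifecycle-hook spec above deletes a pod that declares a preStop HTTP hook and then checks that the handler pod received the GET; the kubelet fires the hook before sending SIGTERM to the container. A sketch of such a pod, against this release line's core/v1 API (Lifecycle handlers are of type Handler here; later releases use LifecycleHandler). The handler address, path, and port are hypothetical stand-ins for the test's handler pod.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// Fired by the kubelet when the pod is deleted, before SIGTERM.
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",
							Host: "10.0.0.10", // hypothetical handler pod IP
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}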
•
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":508,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:56:57.579: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:56:57.610: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:57:03.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5081" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":28,"skipped":521,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:57:03.846: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in container's command
Jan 3 14:57:03.883: INFO: Waiting up to 5m0s for pod "var-expansion-0357bf6c-fddc-4ba1-a56f-360fd19792b8" in namespace "var-expansion-3096" to be "Succeeded or Failed"
Jan 3 14:57:03.888: INFO: Pod "var-expansion-0357bf6c-fddc-4ba1-a56f-360fd19792b8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.72253ms
Jan 3 14:57:05.892: INFO: Pod "var-expansion-0357bf6c-fddc-4ba1-a56f-360fd19792b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007797517s
STEP: Saw pod success
Jan 3 14:57:05.892: INFO: Pod "var-expansion-0357bf6c-fddc-4ba1-a56f-360fd19792b8" satisfied condition "Succeeded or Failed"
Jan 3 14:57:05.895: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 pod var-expansion-0357bf6c-fddc-4ba1-a56f-360fd19792b8 container dapi-container: <nil>
STEP: delete the pod
Jan 3 14:57:05.910: INFO: Waiting for pod var-expansion-0357bf6c-fddc-4ba1-a56f-360fd19792b8 to disappear
Jan 3 14:57:05.913: INFO: Pod var-expansion-0357bf6c-fddc-4ba1-a56f-360fd19792b8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:57:05.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3096" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":545,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:57:05.926: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 3 14:57:07.978: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:57:07.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7214" for this suite.
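The Container Runtime spec above asserts an empty termination message for a pod that succeeds with TerminationMessagePolicy: FallbackToLogsOnError: the fallback to container logs only happens when the container exits with an error, so a successful, quiet container reports an empty message (hence the "Expected: &{} to match" line). A sketch of the relevant container fields; the name, image, and command are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "exit 0"}, // succeed, write nothing
				// Only fall back to the container logs for the termination message
				// when the container fails; on success the message stays empty.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
				TerminationMessagePath:   "/dev/termination-log",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}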
•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":547,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:57:08.052: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:57:08.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9120" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":31,"skipped":577,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:57:08.137: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Create set of pod templates
Jan 3 14:57:08.175: INFO: created test-podtemplate-1
Jan 3 14:57:08.180: INFO: created test-podtemplate-2
Jan 3 14:57:08.185: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Jan 3 14:57:08.189: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Jan 3 14:57:08.204: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:57:08.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-1628" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":32,"skipped":581,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]"]}
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":255,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:51:53.683: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Performing setup for networking test in namespace pod-network-test-9809
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 3 14:51:53.718: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 3 14:51:53.761: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 3 14:51:55.765: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:51:57.765: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:51:59.765: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:52:01.765: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:52:03.765: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:52:05.765: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:52:07.765: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 3 14:52:07.773: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jan 3 14:52:09.777: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jan 3 14:52:11.778: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jan 3 14:52:13.778: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jan 3 14:52:13.784: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jan 3 14:52:13.790: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jan 3 14:52:15.811: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Jan 3 14:52:15.811: INFO: Breadth first check of 192.168.0.31 on host 172.18.0.4...
Jan 3 14:52:15.815: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.56:9080/dial?request=hostname&protocol=http&host=192.168.0.31&port=8080&tries=1'] Namespace:pod-network-test-9809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:52:15.815: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:52:15.926: INFO: Waiting for responses: map[]
Jan 3 14:52:15.926: INFO: reached 192.168.0.31 after 0/1 tries
Jan 3 14:52:15.926: INFO: Breadth first check of 192.168.1.55 on host 172.18.0.7...
Jan 3 14:52:15.930: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.56:9080/dial?request=hostname&protocol=http&host=192.168.1.55&port=8080&tries=1'] Namespace:pod-network-test-9809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:52:15.930: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:52:16.016: INFO: Waiting for responses: map[]
Jan 3 14:52:16.016: INFO: reached 192.168.1.55 after 0/1 tries
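Note: this spec runs an agnhost "netexec" server pod on each of the four schedulable nodes (netserver-0..3, HTTP on 8080, UDP on 8081) plus a client test-container-pod; connectivity is verified by exec'ing into the client pod and asking the netexec instance at 192.168.1.56:9080 (apparently the client pod's own address) to dial each peer and report which hostnames answered. A sketch of one such probe done by hand, reusing this run's names and IPs (they differ per run):

  # dial netserver-2 (192.168.2.35) from inside the client pod; a reachable
  # target answers with its hostname, e.g. {"responses":["netserver-2"]}
  kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-9809 \
    exec test-container-pod -c webserver -- /bin/sh -c \
    "curl -g -q -s 'http://192.168.1.56:9080/dial?request=hostname&protocol=http&host=192.168.2.35&port=8080&tries=1'"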
Jan 3 14:52:16.016: INFO: Breadth first check of 192.168.2.35 on host 172.18.0.6...
Jan 3 14:52:16.019: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.56:9080/dial?request=hostname&protocol=http&host=192.168.2.35&port=8080&tries=1'] Namespace:pod-network-test-9809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:52:16.019: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:52:21.106: INFO: Waiting for responses: map[netserver-2:{}]
Jan 3 14:52:23.106: INFO: Output of kubectl describe pod pod-network-test-9809/netserver-0:
Jan 3 14:52:23.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-9809 describe pod netserver-0 --namespace=pod-network-test-9809'
Jan 3 14:52:23.235: INFO: stderr: ""
Jan 3 14:52:23.235: INFO: stdout:
Name:         netserver-0
Namespace:    pod-network-test-9809
Priority:     0
Node:         k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh/172.18.0.4
Start Time:   Tue, 03 Jan 2023 14:51:53 +0000
Labels:       selector-643d7291-f4a6-4184-a859-63ae864540e7=true
Annotations:  <none>
Status:       Running
IP:           192.168.0.31
IPs:
  IP:  192.168.0.31
Containers:
  webserver:
    Container ID:  containerd://6a44a102066dbfb6db8850fac18977615012b383b30db9ea3586c9fc132805e0
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.21
    Image ID:      k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Tue, 03 Jan 2023 14:51:54 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ksmxd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-ksmxd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-ksmxd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/hostname=k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  30s   default-scheduler  Successfully assigned pod-network-test-9809/netserver-0 to k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-sf9xh
  Normal  Pulled     29s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine
  Normal  Created    29s   kubelet            Created container webserver
  Normal  Started    29s   kubelet            Started container webserver
[... kubectl describe output for netserver-1, netserver-2 and netserver-3 (14:52:23.235-14:52:23.618) omitted: same shape as netserver-0 above (same image, args, probes, tolerations, Restart Count 0, all Running and Ready); only the identity fields differ:
  netserver-1: node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6/172.18.0.7, IP 192.168.1.55, Container ID containerd://e7664fa6c35f5dd983dc70332d276968f7822e507aa2377f9e6a13e057f8d8cd
  netserver-2: node k8s-upgrade-and-conformance-1wcp0z-worker-erlai2/172.18.0.6, IP 192.168.2.35, Container ID containerd://6b490ed6fd03cfac1408320b677611c80200bcf3e0a8ffe66e1e333eed1241d2
  netserver-3: node k8s-upgrade-and-conformance-1wcp0z-worker-u044o2/172.18.0.5, IP 192.168.6.29, Container ID containerd://ace6afe71e367f6d88ca217807a3178216e6b4b88214a9ea5ce54ba0684a6772 ...]
Jan 3 14:52:23.619: INFO: encountered error during dial (did not find expected responses... Tries 1 Command curl -g -q -s 'http://192.168.1.56:9080/dial?request=hostname&protocol=http&host=192.168.2.35&port=8080&tries=1' retrieved map[] expected map[netserver-2:{}])
Jan 3 14:52:23.619: INFO: ...failed...will try again in next pass
Jan 3 14:52:23.619: INFO: Breadth first check of 192.168.6.29 on host 172.18.0.5...
Jan 3 14:52:23.622: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.56:9080/dial?request=hostname&protocol=http&host=192.168.6.29&port=8080&tries=1'] Namespace:pod-network-test-9809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:52:23.622: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:52:23.716: INFO: Waiting for responses: map[]
Jan 3 14:52:23.716: INFO: reached 192.168.6.29 after 0/1 tries
Jan 3 14:52:23.716: INFO: Going to retry 1 out of 4 pods....
Jan 3 14:52:23.716: INFO: Doublechecking 1 pods in host 172.18.0.6 which werent seen the first time.
Jan 3 14:52:23.716: INFO: Now attempting to probe pod [[[ 192.168.2.35 ]]]
Jan 3 14:52:23.719: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.56:9080/dial?request=hostname&protocol=http&host=192.168.2.35&port=8080&tries=1'] Namespace:pod-network-test-9809 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:52:23.719: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:52:28.809: INFO: Waiting for responses: map[netserver-2:{}]
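Note: every netserver reports Ready, meaning the kubelet's readiness probe against :8080/healthz succeeds from the pod's own node; what fails is only the pod-to-pod path from the client pod's node (172.18.0.7, judging by the 192.168.1.x pod subnet) to 192.168.2.35 on 172.18.0.6. A first place to look is the state of that node and of the networking pods scheduled on it (a sketch; the node name is taken from this run):

  # is the node Ready, and are its kube-system networking pods (CNI agent, kube-proxy) healthy?
  kubectl --kubeconfig=/tmp/kubeconfig get node k8s-upgrade-and-conformance-1wcp0z-worker-erlai2 -o wide
  kubectl --kubeconfig=/tmp/kubeconfig -n kube-system get pods -o wide \
    --field-selector spec.nodeName=k8s-upgrade-and-conformance-1wcp0z-worker-erlai2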
[... the same ExecWithOptions probe of 192.168.2.35 (curl -g -q -s 'http://192.168.1.56:9080/dial?request=hostname&protocol=http&host=192.168.2.35&port=8080&tries=1' from test-container-pod) repeated roughly every 7 seconds from 14:52:30.816 through 14:57:43.033, exhausting the try budget of 46; every attempt ended with "Waiting for responses: map[netserver-2:{}]", i.e. netserver-2 never answered ...]
Jan 3 14:57:48.123: INFO: Waiting for responses: map[netserver-2:{}]
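Note: five minutes of probing produced zero replies from 192.168.2.35 while its readiness probe kept passing, so before reading the second round of describes below it is worth confirming the target container never restarted mid-window. A quicker check than a full describe (a sketch using this run's names):

  # print each container's restart count and readiness for the unreachable pod
  kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-9809 get pod netserver-2 \
    -o jsonpath='{range .status.containerStatuses[*]}{.name}{" restarts="}{.restartCount}{" ready="}{.ready}{"\n"}{end}'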
Jan 3 14:57:50.123: INFO: Output of kubectl describe pod pod-network-test-9809/netserver-0:
Jan 3 14:57:50.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-9809 describe pod netserver-0 --namespace=pod-network-test-9809'
Jan 3 14:57:50.246: INFO: stderr: ""
[... second-round describe output for netserver-0 (14:57:50.246), netserver-1 (14:57:50.361), netserver-2 (14:57:50.483) and netserver-3 (14:57:50.599) omitted: identical to the 14:52:23 dumps above except that event ages now read 5m57s/5m56s; all four pods still Running and Ready with Restart Count 0 ...]
Jan 3 14:57:50.600: INFO: encountered error during dial (did not find expected responses... Tries 46 Command curl -g -q -s 'http://192.168.1.56:9080/dial?request=hostname&protocol=http&host=192.168.2.35&port=8080&tries=1' retrieved map[] expected map[netserver-2:{}])
Jan 3 14:57:50.600: INFO: ... Done probing pod [[[ 192.168.2.35 ]]]
Jan 3 14:57:50.600: INFO: succeeded at polling 3 out of 4 connections
Jan 3 14:57:50.600: INFO: pod polling failure summary:
Jan 3 14:57:50.600: INFO: Collected error: did not find expected responses... Tries 46 Command curl -g -q -s 'http://192.168.1.56:9080/dial?request=hostname&protocol=http&host=192.168.2.35&port=8080&tries=1' retrieved map[] expected map[netserver-2:{}]
Jan 3 14:57:50.600: INFO: Collected error: did not find expected responses...
Tries 46
Command curl -g -q -s 'http://192.168.1.56:9080/dial?request=hostname&protocol=http&host=192.168.2.35&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}]
Jan 3 14:57:50.600: FAIL: failed, 1 out of 4 connections failed

Full Stack Trace
k8s.io/kubernetes/test/e2e/common.glob..func16.1.2()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:82 +0x69
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00248bc80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00248bc80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00248bc80, 0x4fc9940)
    /usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:57:50.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9809" for this suite.

• Failure [356.930 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

    Jan 3 14:57:50.600: failed, 1 out of 4 connections failed

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:82
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":255,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
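The probe that failed is agnhost netexec's dial proxy: the framework execs curl inside a prober pod, and that pod's /dial handler fans an HTTP request out to the target pod's /hostname endpoint and reports which hosts answered. Three of the four netserver pods answered; only 192.168.2.35 (netserver-2 on k8s-upgrade-and-conformance-1wcp0z-worker-erlai2) did not, and its readiness probe was passing per the describe above, which points at pod-to-pod traffic toward that node rather than at the endpoint itself. A minimal by-hand replay, assuming the prober pod is named test-container-pod as in the rerun below and the namespace has not yet been torn down; the {"responses":[...]} shape is netexec's response format:

    # Replay the failing dial check; 192.168.1.56 is the prober pod, 192.168.2.35 is netserver-2.
    kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-9809 exec test-container-pod -- \
      curl -g -q -s 'http://192.168.1.56:9080/dial?request=hostname&protocol=http&host=192.168.2.35&port=8080&tries=1'
    # A healthy run returns {"responses":["netserver-2"]}; the failure above retrieved an empty set.
    # Curling the target's /hostname endpoint directly separates CNI routing from endpoint health:
    kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-9809 exec test-container-pod -- \
      curl -s --max-time 5 'http://192.168.2.35:8080/hostname'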
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:57:50.617: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Performing setup for networking test in namespace pod-network-test-534
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 3 14:57:50.653: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 3 14:57:50.692: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 3 14:57:52.696: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:57:54.696: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:57:56.696: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:57:58.696: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:58:00.696: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:58:02.696: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:58:04.696: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:58:06.696: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 3 14:58:08.696: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 3 14:58:08.702: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jan 3 14:58:10.706: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jan 3 14:58:10.712: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jan 3 14:58:10.717: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jan 3 14:58:12.735: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Jan 3 14:58:12.735: INFO: Breadth first check of 192.168.0.46 on host 172.18.0.4...
Jan 3 14:58:12.738: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.46:9080/dial?request=hostname&protocol=http&host=192.168.0.46&port=8080&tries=1'] Namespace:pod-network-test-534 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:58:12.738: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:58:12.822: INFO: Waiting for responses: map[]
Jan 3 14:58:12.822: INFO: reached 192.168.0.46 after 0/1 tries
Jan 3 14:58:12.822: INFO: Breadth first check of 192.168.1.72 on host 172.18.0.7...
Jan 3 14:58:12.825: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.46:9080/dial?request=hostname&protocol=http&host=192.168.1.72&port=8080&tries=1'] Namespace:pod-network-test-534 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:58:12.825: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:58:12.926: INFO: Waiting for responses: map[]
Jan 3 14:58:12.926: INFO: reached 192.168.1.72 after 0/1 tries
Jan 3 14:58:12.927: INFO: Breadth first check of 192.168.2.39 on host 172.18.0.6...
Jan 3 14:58:12.930: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.46:9080/dial?request=hostname&protocol=http&host=192.168.2.39&port=8080&tries=1'] Namespace:pod-network-test-534 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:58:12.930: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:58:13.019: INFO: Waiting for responses: map[]
Jan 3 14:58:13.019: INFO: reached 192.168.2.39 after 0/1 tries
Jan 3 14:58:13.019: INFO: Breadth first check of 192.168.6.45 on host 172.18.0.5...
Jan 3 14:58:13.022: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.46:9080/dial?request=hostname&protocol=http&host=192.168.6.45&port=8080&tries=1'] Namespace:pod-network-test-534 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 3 14:58:13.022: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 3 14:58:13.104: INFO: Waiting for responses: map[]
Jan 3 14:58:13.104: INFO: reached 192.168.6.45 after 0/1 tries
Jan 3 14:58:13.104: INFO: Going to retry 0 out of 4 pods....
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:58:13.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-534" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":255,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:58:13.243: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-projected-all-test-volume-968299ba-41ec-44b9-9d7d-fd4359a186ea
STEP: Creating secret with name secret-projected-all-test-volume-da49ecbc-f339-44a5-b14a-c5505371a0e6
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 3 14:58:13.292: INFO: Waiting up to 5m0s for pod "projected-volume-a55fb55e-452f-414d-82a7-6012aaca6a88" in namespace "projected-900" to be "Succeeded or Failed"
Jan 3 14:58:13.297: INFO: Pod "projected-volume-a55fb55e-452f-414d-82a7-6012aaca6a88": Phase="Pending", Reason="", readiness=false. Elapsed: 3.542827ms
Jan 3 14:58:15.301: INFO: Pod "projected-volume-a55fb55e-452f-414d-82a7-6012aaca6a88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008051128s
STEP: Saw pod success
Jan 3 14:58:15.301: INFO: Pod "projected-volume-a55fb55e-452f-414d-82a7-6012aaca6a88" satisfied condition "Succeeded or Failed"
Jan 3 14:58:15.304: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6 pod projected-volume-a55fb55e-452f-414d-82a7-6012aaca6a88 container projected-all-volume-test: <nil>
STEP: delete the pod
Jan 3 14:58:15.328: INFO: Waiting for pod projected-volume-a55fb55e-452f-414d-82a7-6012aaca6a88 to disappear
Jan 3 14:58:15.332: INFO: Pod projected-volume-a55fb55e-452f-414d-82a7-6012aaca6a88 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:58:15.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-900" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":347,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
------------------------------
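The spec above wires a configMap, a secret, and downward API metadata into one projected volume and asserts a pod can read all three through a single mount. A minimal reproduction with hypothetical names (demo-cm, demo-secret, projected-demo — the conformance fixture uses generated names and its own test image):

    kubectl create configmap demo-cm --from-literal=configmap-data=hello
    kubectl create secret generic demo-secret --from-literal=secret-data=shh
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-demo
    spec:
      restartPolicy: Never
      containers:
      - name: reader
        image: busybox
        command: ["sh", "-c", "cat /all/configmap-data /all/secret-data /all/podname"]
        volumeMounts:
        - name: all-in-one
          mountPath: /all
      volumes:
      - name: all-in-one
        projected:
          sources:
          - configMap:
              name: demo-cm
          - secret:
              name: demo-secret
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
    EOF
    kubectl logs projected-demo   # prints the three projected values from one mount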
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:58:15.402: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:58:15.445: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-c7d2f594-d951-40c0-b10b-6b5480a6cd9e" in namespace "security-context-test-2684" to be "Succeeded or Failed"
Jan 3 14:58:15.449: INFO: Pod "busybox-privileged-false-c7d2f594-d951-40c0-b10b-6b5480a6cd9e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.616823ms
Jan 3 14:58:17.453: INFO: Pod "busybox-privileged-false-c7d2f594-d951-40c0-b10b-6b5480a6cd9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007790053s
Jan 3 14:58:17.453: INFO: Pod "busybox-privileged-false-c7d2f594-d951-40c0-b10b-6b5480a6cd9e" satisfied condition "Succeeded or Failed"
Jan 3 14:58:17.460: INFO: Got logs for pod "busybox-privileged-false-c7d2f594-d951-40c0-b10b-6b5480a6cd9e": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:58:17.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2684" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":387,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
------------------------------
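The passing output here is the denial itself: the pod runs with securityContext.privileged: false, so the RTNETLINK error in its log is the expected result of attempting a network-admin operation without privilege. A rough by-hand equivalent, with unpriv-demo as a hypothetical name (the conformance pod runs a similar ip command):

    kubectl run unpriv-demo --restart=Never --image=busybox \
      --overrides='{"apiVersion":"v1","spec":{"containers":[{"name":"unpriv-demo","image":"busybox","command":["ip","link","add","dummy0","type","dummy"],"securityContext":{"privileged":false}}]}}'
    sleep 5    # let the container run to completion
    kubectl logs unpriv-demo   # expect: ip: RTNETLINK answers: Operation not permitted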
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:58:17.527: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5590.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5590.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5590.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5590.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 3 14:58:19.598: INFO: DNS probes using dns-test-14faeea0-4cfd-4069-8cda-24bc916ffb2c succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5590.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5590.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5590.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5590.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 3 14:58:21.652: INFO: File wheezy_udp@dns-test-service-3.dns-5590.svc.cluster.local from pod dns-5590/dns-test-2aa2aa1e-a173-4b5e-b98a-f8288505a96e contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 3 14:58:21.656: INFO: Lookups using dns-5590/dns-test-2aa2aa1e-a173-4b5e-b98a-f8288505a96e failed for: [wheezy_udp@dns-test-service-3.dns-5590.svc.cluster.local]
Jan 3 14:58:26.660: INFO: File wheezy_udp@dns-test-service-3.dns-5590.svc.cluster.local from pod dns-5590/dns-test-2aa2aa1e-a173-4b5e-b98a-f8288505a96e contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 3 14:58:26.663: INFO: File jessie_udp@dns-test-service-3.dns-5590.svc.cluster.local from pod dns-5590/dns-test-2aa2aa1e-a173-4b5e-b98a-f8288505a96e contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 3 14:58:26.663: INFO: Lookups using dns-5590/dns-test-2aa2aa1e-a173-4b5e-b98a-f8288505a96e failed for: [wheezy_udp@dns-test-service-3.dns-5590.svc.cluster.local jessie_udp@dns-test-service-3.dns-5590.svc.cluster.local]
Jan 3 14:58:31.660: INFO: File wheezy_udp@dns-test-service-3.dns-5590.svc.cluster.local from pod dns-5590/dns-test-2aa2aa1e-a173-4b5e-b98a-f8288505a96e contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 3 14:58:31.664: INFO: File jessie_udp@dns-test-service-3.dns-5590.svc.cluster.local from pod dns-5590/dns-test-2aa2aa1e-a173-4b5e-b98a-f8288505a96e contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 3 14:58:31.664: INFO: Lookups using dns-5590/dns-test-2aa2aa1e-a173-4b5e-b98a-f8288505a96e failed for: [wheezy_udp@dns-test-service-3.dns-5590.svc.cluster.local jessie_udp@dns-test-service-3.dns-5590.svc.cluster.local]
Jan 3 14:58:36.664: INFO: DNS probes using dns-test-2aa2aa1e-a173-4b5e-b98a-f8288505a96e succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5590.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5590.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5590.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5590.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 3 14:58:38.751: INFO: DNS probes using dns-test-5cb05ee9-6be1-48cd-8bd7-5833b855f57e succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:58:38.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5590" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":13,"skipped":425,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
------------------------------
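The ExternalName sequence above exercises CNAME publication: the service first resolves to foo.example.com, the spec patches spec.externalName to bar.example.com and polls dig until the new CNAME is served (the interim "contains 'foo.example.com.'" lines are CoreDNS still answering inside its cache/TTL window, not an error), then converts the service to ClusterIP and expects an A record instead. The same cycle by hand, with hypothetical names and any image that ships dig (the tag below is illustrative):

    kubectl create service externalname dns-demo --external-name foo.example.com
    kubectl run digger --restart=Never --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4 -- sleep 3600
    kubectl exec digger -- dig +short dns-demo.default.svc.cluster.local CNAME   # foo.example.com.
    kubectl patch service dns-demo -p '{"spec":{"externalName":"bar.example.com"}}'
    kubectl exec digger -- dig +short dns-demo.default.svc.cluster.local CNAME   # bar.example.com. once the cached answer expires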
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:58:38.854: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 3 14:58:38.919: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a09143d4-83da-4226-9d77-b603f535dd0e" in namespace "projected-8893" to be "Succeeded or Failed"
Jan 3 14:58:38.925: INFO: Pod "downwardapi-volume-a09143d4-83da-4226-9d77-b603f535dd0e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024627ms
Jan 3 14:58:40.931: INFO: Pod "downwardapi-volume-a09143d4-83da-4226-9d77-b603f535dd0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011806446s
STEP: Saw pod success
Jan 3 14:58:40.931: INFO: Pod "downwardapi-volume-a09143d4-83da-4226-9d77-b603f535dd0e" satisfied condition "Succeeded or Failed"
Jan 3 14:58:40.936: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 pod downwardapi-volume-a09143d4-83da-4226-9d77-b603f535dd0e container client-container: <nil>
STEP: delete the pod
Jan 3 14:58:40.970: INFO: Waiting for pod downwardapi-volume-a09143d4-83da-4226-9d77-b603f535dd0e to disappear
Jan 3 14:58:40.974: INFO: Pod downwardapi-volume-a09143d4-83da-4226-9d77-b603f535dd0e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:58:40.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8893" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":431,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:58:40.991: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating service nodeport-test with type=NodePort in namespace services-4744
STEP: creating replication controller nodeport-test in namespace services-4744
I0103 14:58:41.070799      20 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-4744, replica count: 2
I0103 14:58:44.121276      20 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 3 14:58:44.121: INFO: Creating new exec pod
Jan 3 14:58:47.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4744 exec execpod7wvlz -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jan 3 14:58:47.305: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Jan 3 14:58:47.305: INFO: stdout: ""
Jan 3 14:58:47.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4744 exec execpod7wvlz -- /bin/sh -x -c nc -zv -t -w 2 10.143.2.122 80'
Jan 3 14:58:47.464: INFO: stderr: "+ nc -zv -t -w 2 10.143.2.122 80\nConnection to 10.143.2.122 80 port [tcp/http] succeeded!\n"
Jan 3 14:58:47.465: INFO: stdout: ""
Jan 3 14:58:47.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4744 exec execpod7wvlz -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 30723'
Jan 3 14:58:47.629: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.5 30723\nConnection to 172.18.0.5 30723 port [tcp/30723] succeeded!\n"
Jan 3 14:58:47.629: INFO: stdout: ""
Jan 3 14:58:47.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4744 exec execpod7wvlz -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 30723'
Jan 3 14:58:47.810: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.6 30723\nConnection to 172.18.0.6 30723 port [tcp/30723] succeeded!\n"
Jan 3 14:58:47.810: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:58:47.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4744" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":15,"skipped":433,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:58:47.822: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85
[It] deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 14:58:47.862: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 3 14:58:52.868: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 3 14:58:52.868: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 3 14:58:54.874: INFO: Creating deployment "test-rollover-deployment"
Jan 3 14:58:54.883: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 3 14:58:56.891: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 3 14:58:56.900: INFO: Ensure that both replica sets have 1 created replica
Jan 3 14:58:56.912: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 3 14:58:56.929: INFO: Updating deployment test-rollover-deployment
Jan 3 14:58:56.930: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 3 14:58:58.940: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 3 14:58:58.946: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 3 14:58:58.952: INFO: all replica sets need to contain the pod-template-hash label
Jan 3 14:58:58.952: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354734, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354734, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354738,
loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354734, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 14:59:00.960: INFO: all replica sets need to contain the pod-template-hash label Jan 3 14:59:00.960: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354734, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354734, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354738, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354734, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 14:59:02.964: INFO: all replica sets need to contain the pod-template-hash label Jan 3 14:59:02.964: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354734, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354734, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354738, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354734, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 14:59:04.961: INFO: all replica sets need to contain the pod-template-hash label Jan 3 14:59:04.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354734, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354734, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354738, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354734, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 14:59:06.960: INFO: all replica sets need to contain the pod-template-hash label Jan 3 14:59:06.960: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354734, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354734, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354738, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63808354734, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 14:59:08.961: INFO: Jan 3 14:59:08.961: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 3 14:59:08.971: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-3405 0cd8b611-19f9-4076-8729-f1a6f31d57d1 11114 2 2023-01-03 14:58:54 +0000 UTC <nil> <nil> map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-03 14:58:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-03 14:59:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003956598 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-03 14:58:54 +0000 UTC,LastTransitionTime:2023-01-03 14:58:54 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-668db69979" has successfully progressed.,LastUpdateTime:2023-01-03 14:59:08 +0000 UTC,LastTransitionTime:2023-01-03 14:58:54 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 3 14:59:08.976: INFO: New ReplicaSet "test-rollover-deployment-668db69979" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-668db69979 deployment-3405 973db087-150a-4f6c-b1e6-78c64186e7c7 11103 2 2023-01-03 14:58:56 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668db69979] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 0cd8b611-19f9-4076-8729-f1a6f31d57d1 0xc003956b27 0xc003956b28}] [] [{kube-controller-manager Update apps/v1 2023-01-03 14:59:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0cd8b611-19f9-4076-8729-f1a6f31d57d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 668db69979,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668db69979] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false 
false}] [] Always 0xc003956bb8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 3 14:59:08.976: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 3 14:59:08.976: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3405 28a169ad-efff-4a97-ae7a-0b7c2bbeed74 11113 2 2023-01-03 14:58:47 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 0cd8b611-19f9-4076-8729-f1a6f31d57d1 0xc003956907 0xc003956908}] [] [{e2e.test Update apps/v1 2023-01-03 14:58:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-03 14:59:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0cd8b611-19f9-4076-8729-f1a6f31d57d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0039569a8 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 3 14:59:08.976: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-3405 f03e5b72-80ac-4034-b1fb-98231f8c2de0 11075 2 2023-01-03 14:58:54 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 0cd8b611-19f9-4076-8729-f1a6f31d57d1 0xc003956c27 0xc003956c28}] [] [{kube-controller-manager Update 
apps/v1 2023-01-03 14:58:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0cd8b611-19f9-4076-8729-f1a6f31d57d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003956cc8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 3 14:59:08.981: INFO: Pod "test-rollover-deployment-668db69979-fql98" is available: &Pod{ObjectMeta:{test-rollover-deployment-668db69979-fql98 test-rollover-deployment-668db69979- deployment-3405 04006e3a-d86a-4924-8f56-86e4ebf3c237 11084 0 2023-01-03 14:58:56 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668db69979] map[] [{apps/v1 ReplicaSet test-rollover-deployment-668db69979 973db087-150a-4f6c-b1e6-78c64186e7c7 0xc003957197 0xc003957198}] [] [{kube-controller-manager Update v1 2023-01-03 14:58:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"973db087-150a-4f6c-b1e6-78c64186e7c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-03 14:58:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.51\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ts4fv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ts4fv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ts4fv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-1wcp0z-worker-u044o2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:58:56 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:58:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:58:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-03 14:58:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.51,StartTime:2023-01-03 14:58:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-03 14:58:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://9130902f89ba500f3e7f888971ed49d58baa50633cf7ffe013ab484f63e72f42,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.51,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:59:08.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3405" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":16,"skipped":435,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
------------------------------
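A rollover, as this spec uses the term, is a template update that lands while an earlier rollout is still in progress: the deployment starts from pods owned by a bare ReplicaSet (test-rollover-controller), is updated once to an image that can never pull (the redis-slave ReplicaSet pinned to a nonexistent tag), and then updated again to agnhost; with minReadySeconds=10 the controller holds availability back, which is why the status dumps above sit at UpdatedReplicas:1, AvailableReplicas:1, UnavailableReplicas:1 until revision 2 completes and both superseded ReplicaSets are scaled to zero. A condensed sketch of the same mechanics, not the exact fixture:

    kubectl create deployment rollover-demo --image=gcr.io/google_samples/gb-redisslave:nonexistent
    # second template change supersedes the stuck first rollout before it ever completes
    kubectl set image deployment/rollover-demo '*=k8s.gcr.io/e2e-test-images/agnhost:2.21'
    kubectl rollout status deployment/rollover-demo --timeout=5m
    kubectl get rs -l app=rollover-demo   # superseded ReplicaSets at 0, newest at the desired count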
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9517.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9517.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done �[1mSTEP�[0m: creating a pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 3 14:59:11.098: INFO: DNS probes using dns-9517/dns-test-1c45ea1e-979a-4617-bd79-13dd07feab1d succeeded �[1mSTEP�[0m: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 14:59:11.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-9517" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":17,"skipped":444,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 3 14:59:11.197: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a test headless service �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8701.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8701.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8701.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8701.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8701.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8701.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8701.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8701.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8701.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8701.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8701.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8701.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8701.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8701.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8701.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8701.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8701.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8701.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done �[1mSTEP�[0m: creating a pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 3 14:59:13.285: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:13.300: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:13.308: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:13.320: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:13.327: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:13.331: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:13.336: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:13.347: INFO: Lookups using dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8701.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8701.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8701.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8701.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8701.svc.cluster.local jessie_udp@dns-test-service-2.dns-8701.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8701.svc.cluster.local] Jan 3 14:59:18.360: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:18.366: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:18.384: 
INFO: Unable to read jessie_udp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:18.388: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:18.395: INFO: Lookups using dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435 failed for: [wheezy_udp@dns-test-service-2.dns-8701.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8701.svc.cluster.local jessie_udp@dns-test-service-2.dns-8701.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8701.svc.cluster.local] Jan 3 14:59:23.360: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:23.365: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:23.382: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:23.385: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:23.392: INFO: Lookups using dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435 failed for: [wheezy_udp@dns-test-service-2.dns-8701.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8701.svc.cluster.local jessie_udp@dns-test-service-2.dns-8701.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8701.svc.cluster.local] Jan 3 14:59:28.359: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:28.363: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:28.383: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:28.386: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:28.392: INFO: Lookups using dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435 failed for: [wheezy_udp@dns-test-service-2.dns-8701.svc.cluster.local 
wheezy_tcp@dns-test-service-2.dns-8701.svc.cluster.local jessie_udp@dns-test-service-2.dns-8701.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8701.svc.cluster.local] Jan 3 14:59:33.361: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:33.365: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:33.385: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:33.389: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:33.397: INFO: Lookups using dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435 failed for: [wheezy_udp@dns-test-service-2.dns-8701.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8701.svc.cluster.local jessie_udp@dns-test-service-2.dns-8701.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8701.svc.cluster.local] Jan 3 14:59:38.363: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:38.367: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:38.386: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:38.389: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8701.svc.cluster.local from pod dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435: the server could not find the requested resource (get pods dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435) Jan 3 14:59:38.396: INFO: Lookups using dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435 failed for: [wheezy_udp@dns-test-service-2.dns-8701.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8701.svc.cluster.local jessie_udp@dns-test-service-2.dns-8701.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8701.svc.cluster.local] Jan 3 14:59:43.394: INFO: DNS probes using dns-8701/dns-test-b57cde02-9a95-48fc-ae0f-a68397c55435 succeeded �[1mSTEP�[0m: deleting the pod �[1mSTEP�[0m: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 14:59:43.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-8701" for this suite. 
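------------------------------
A note on the retry pattern visible above: the repeated "Unable to read <name> from pod ..." INFO lines are individual attempts of a poll loop, not hard failures, which is why this spec can still finish with "DNS probes ... succeeded". The probe pod writes an OK file per lookup, and the framework re-reads each expected file through the API server's pod proxy until every read succeeds or the timeout expires (the stack trace for the failed /etc/hosts spec later in this log shows the chain: wait.PollImmediate -> assertFilesContain). A minimal client-go sketch of that pattern, with illustrative function name and durations:

package e2esketch

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForProbeResults polls until every expected result file can be read
// back through the API server's pod proxy, i.e.
// GET /api/v1/namespaces/<ns>/pods/<pod>/proxy/results/<file>.
func waitForProbeResults(cs kubernetes.Interface, ns, pod string, files []string) error {
	return wait.PollImmediate(5*time.Second, 10*time.Minute, func() (bool, error) {
		for _, f := range files {
			raw, err := cs.CoreV1().RESTClient().Get().
				Namespace(ns).
				Resource("pods").
				SubResource("proxy").
				Name(pod).
				Suffix("results", f).
				Do(context.TODO()).Raw()
			if err != nil || len(raw) == 0 {
				// Transient: log and keep polling, exactly like the
				// "Unable to read ..." lines above.
				fmt.Printf("unable to read %s from pod %s/%s: %v\n", f, ns, pod, err)
				return false, nil
			}
		}
		return true, nil
	})
}
------------------------------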
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":18,"skipped":482,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:59:43.472: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-94553271-1600-43d3-87e4-41ff843dd928
STEP: Creating a pod to test consume secrets
Jan 3 14:59:43.521: INFO: Waiting up to 5m0s for pod "pod-secrets-e4500fcc-6a7f-4171-a4a9-3862d9160f7f" in namespace "secrets-2270" to be "Succeeded or Failed"
Jan 3 14:59:43.525: INFO: Pod "pod-secrets-e4500fcc-6a7f-4171-a4a9-3862d9160f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.494421ms
Jan 3 14:59:45.529: INFO: Pod "pod-secrets-e4500fcc-6a7f-4171-a4a9-3862d9160f7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007324772s
STEP: Saw pod success
Jan 3 14:59:45.529: INFO: Pod "pod-secrets-e4500fcc-6a7f-4171-a4a9-3862d9160f7f" satisfied condition "Succeeded or Failed"
Jan 3 14:59:45.531: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 pod pod-secrets-e4500fcc-6a7f-4171-a4a9-3862d9160f7f container secret-volume-test: <nil>
STEP: delete the pod
Jan 3 14:59:45.552: INFO: Waiting for pod pod-secrets-e4500fcc-6a7f-4171-a4a9-3862d9160f7f to disappear
Jan 3 14:59:45.556: INFO: Pod pod-secrets-e4500fcc-6a7f-4171-a4a9-3862d9160f7f no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:59:45.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2270" for this suite.
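------------------------------
For context on the secret-volume spec above: the fixture is a one-shot pod that mounts the Secret as a volume and runs a container that prints the mounted file back, after which the framework waits for the pod to reach "Succeeded or Failed" and inspects the container log. An illustrative reconstruction, not the framework's literal fixture; the agnhost mounttest invocation and paths are assumptions:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretTestPod mirrors the shape of the "pod-secrets-..." pod in the log:
// a secret volume plus a short-lived reader container named
// "secret-volume-test".
func secretTestPod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-secrets-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // one-shot: pod ends in Succeeded/Failed
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				// Assumed invocation: agnhost's mounttest utility echoing the
				// mounted secret content so the test can assert on the log.
				Command: []string{"/agnhost", "mounttest", "--file_content=/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
}
------------------------------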
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":486,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:59:45.603: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187
[It] should be updated [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 3 14:59:48.176: INFO: Successfully updated pod "pod-update-21163e2c-46e7-4818-bbae-d4071489f35d"
STEP: verifying the updated pod is in kubernetes
Jan 3 14:59:48.183: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:59:48.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5760" for this suite.
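------------------------------
The "updating the pod" step above is a read-modify-write against the API server; done naively it can race with the kubelet's own updates to the same object, so the canonical client-go pattern retries on conflict. A minimal sketch (the label mutation is illustrative only):

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updatePodLabel re-fetches the pod on every attempt so a 409 Conflict
// (stale resourceVersion) is resolved by retrying with fresh state.
func updatePodLabel(cs kubernetes.Interface, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["updated"] = "true" // illustrative change
		_, err = cs.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err
	})
}
------------------------------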
•
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":503,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:54:52.337: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0103 14:54:53.421250 14 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jan 3 14:59:53.435: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:59:53.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4747" for this suite.
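------------------------------
Reading the garbage-collector spec above: "delete the deployment" is expected to cascade to the ReplicaSet and Pods, and the "expected 0 rs, got 1 rs" / "expected 0 pods, got 2 pods" STEP lines are the poll observing leftovers before the collector catches up (the spec still passed; see the SLOW TEST summary just below). A client-go sketch of a deletion that requests cascading; whether the conformance test uses background or foreground propagation is not visible in this log:

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteDeploymentCascading asks the garbage collector to delete the
// Deployment's dependents (ReplicaSets, then Pods) instead of orphaning them.
func deleteDeploymentCascading(cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.AppsV1().Deployments(ns).Delete(context.TODO(), name,
		metav1.DeleteOptions{PropagationPolicy: &policy})
}
------------------------------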
• [SLOW TEST:301.120 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":66,"skipped":1097,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 14:59:53.528: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through the lifecycle of a ServiceAccount [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a ServiceAccount
STEP: watching for the ServiceAccount to be added
STEP: patching the ServiceAccount
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 14:59:53.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8952" for this suite.
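------------------------------
The ServiceAccount lifecycle steps above map one-to-one onto plain client-go calls; a compact sketch (the label key/value used for the LabelSelector step is an assumption, since the log does not show it):

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// serviceAccountLifecycle: create, patch a label on, find via that label,
// then delete a ServiceAccount, mirroring the STEP lines above.
func serviceAccountLifecycle(cs kubernetes.Interface, ns string) error {
	sas := cs.CoreV1().ServiceAccounts(ns)
	sa, err := sas.Create(context.TODO(),
		&corev1.ServiceAccount{ObjectMeta: metav1.ObjectMeta{Name: "e2e-sa"}},
		metav1.CreateOptions{})
	if err != nil {
		return err
	}
	if _, err := sas.Patch(context.TODO(), sa.Name, types.StrategicMergePatchType,
		[]byte(`{"metadata":{"labels":{"e2e":"lifecycle"}}}`), metav1.PatchOptions{}); err != nil {
		return err
	}
	if _, err := sas.List(context.TODO(),
		metav1.ListOptions{LabelSelector: "e2e=lifecycle"}); err != nil {
		return err
	}
	return sas.Delete(context.TODO(), sa.Name, metav1.DeleteOptions{})
}
------------------------------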
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":67,"skipped":1111,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 3 14:59:53.656: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating the pod Jan 3 14:59:56.243: INFO: Successfully updated pod "annotationupdate45f72c74-904a-40fe-a4ab-b8944a9b0ee4" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 15:00:00.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-754" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":68,"skipped":1114,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 3 15:00:00.296: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 3 15:00:00.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c39c5110-8dde-46e2-98fa-6206e60f3da5" in namespace "projected-966" to be "Succeeded or Failed" Jan 3 15:00:00.346: INFO: Pod "downwardapi-volume-c39c5110-8dde-46e2-98fa-6206e60f3da5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116354ms Jan 3 15:00:02.350: INFO: Pod "downwardapi-volume-c39c5110-8dde-46e2-98fa-6206e60f3da5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008341958s �[1mSTEP�[0m: Saw pod success Jan 3 15:00:02.351: INFO: Pod "downwardapi-volume-c39c5110-8dde-46e2-98fa-6206e60f3da5" satisfied condition "Succeeded or Failed" Jan 3 15:00:02.354: INFO: Trying to get logs from node k8s-upgrade-and-conformance-1wcp0z-worker-u044o2 pod downwardapi-volume-c39c5110-8dde-46e2-98fa-6206e60f3da5 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 3 15:00:02.371: INFO: Waiting for pod downwardapi-volume-c39c5110-8dde-46e2-98fa-6206e60f3da5 to disappear Jan 3 15:00:02.375: INFO: Pod downwardapi-volume-c39c5110-8dde-46e2-98fa-6206e60f3da5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 15:00:02.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-966" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":69,"skipped":1127,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":88,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 3 14:55:04.008: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5505.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5505.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5505.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5505.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5505.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5505.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done �[1mSTEP�[0m: creating a pod to probe /etc/hosts �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 3 14:58:40.804: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b59d30a6-d77d-4fd6-b537-97f58b6d23fd: an error on the server ("unknown") has prevented the request from succeeding (get pods dns-test-b59d30a6-d77d-4fd6-b537-97f58b6d23fd) Jan 3 15:00:06.074: FAIL: Unable to read wheezy_hosts@dns-querier-1 from pod dns-5505/dns-test-b59d30a6-d77d-4fd6-b537-97f58b6d23fd: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-5505/pods/dns-test-b59d30a6-d77d-4fd6-b537-97f58b6d23fd/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0035d7df8, 0xcb0200, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00388b200, 0xc0035d7df8, 0xc00388b200, 0xc0035d7df8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0035d7df8, 0x4a, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc00389e980, 0x8, 0x8, 0x4dccbe5, 0x7, 0xc003b3fc00, 0x56112e0, 0xc000e346e0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x158 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:458 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0011ca000, 0xc003b3fc00, 0xc00389e980, 0x8, 0x8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:521 +0x34e k8s.io/kubernetes/test/e2e/network.glob..func2.4() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:126 +0x62a k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0019dd680) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0019dd680) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0019dd680, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 E0103 15:00:06.075001 16 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jan 3 15:00:06.074: Unable to read wheezy_hosts@dns-querier-1 from pod dns-5505/dns-test-b59d30a6-d77d-4fd6-b537-97f58b6d23fd: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-5505/pods/dns-test-b59d30a6-d77d-4fd6-b537-97f58b6d23fd/proxy/results/wheezy_hosts@dns-querier-1\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0035d7df8, 0xcb0200, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00388b200, 0xc0035d7df8, 0xc00388b200, 0xc0035d7df8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0035d7df8, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc00389e980, 0x8, 0x8, 0x4dccbe5, 0x7, 0xc003b3fc00, 0x56112e0, 0xc000e346e0, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:458\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0011ca000, 0xc003b3fc00, 0xc00389e980, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:521 +0x34e\nk8s.io/kubernetes/test/e2e/network.glob..func2.4()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:126 +0x62a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc0019dd680)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc0019dd680)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc0019dd680, 0x4fc9940)\n\t/usr/local/go/src/testing/testing.go:1123 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1168 
+0x2b3"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. ) goroutine 110 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x499f1e0, 0xc003802180) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x499f1e0, 0xc003802180) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0032de500, 0x12f, 0x77a462c, 0x7d, 0xd3, 0xc003264000, 0x7fb) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5 panic(0x41905e0, 0x5431f10) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc0032de500, 0x12f, 0xc0035d78a0, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0032de500, 0x12f, 0xc0035d7988, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5 k8s.io/kubernetes/test/e2e/framework.Failf(0x4e68bfb, 0x24, 0xc0035d7be8, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219 k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:481 +0xa6d k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0035d7df8, 0xcb0200, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00388b200, 0xc0035d7df8, 0xc00388b200, 0xc0035d7df8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0035d7df8, 0x4a, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc00389e980, 0x8, 0x8, 0x4dccbe5, 0x7, 0xc003b3fc00, 0x56112e0, 0xc000e346e0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x158 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:458 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0011ca000, 0xc003b3fc00, 0xc00389e980, 0x8, 0x8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:521 +0x34e k8s.io/kubernetes/test/e2e/network.glob..func2.4() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:126 +0x62a k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000678840, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000678840, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc001124a40, 0x54fc2e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001e6db30, 0x0, 0x54fc2e0, 0xc00015a8c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001e6db30, 0x54fc2e0, 0xc00015a8c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001e1d680, 0xc001e6db30, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001e1d680, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001e1d680, 0xc002883310) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000148230, 0x7f39964ef6c0, 0xc0019dd680, 0x4e003e0, 0x14, 0xc0023ede90, 0x3, 0x3, 0x55b68a0, 0xc00015a8c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x5500f20, 0xc0019dd680, 0x4e003e0, 0x14, 0xc00143a280, 0x3, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x5500f20, 0xc0019dd680, 0x4e003e0, 0x14, 0xc0007d3160, 0x2, 0x2, 0x25)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0019dd680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0019dd680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0019dd680, 0x4fc9940)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 3 15:00:06.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5505" for this suite.
• Failure [302.093 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
  Jan 3 15:00:06.074: Unable to read wheezy_hosts@dns-querier-1 from pod dns-5505/dns-test-b59d30a6-d77d-4fd6-b537-97f58b6d23fd: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-5505/pods/dns-test-b59d30a6-d77d-4fd6-b537-97f58b6d23fd/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211
------------------------------
{"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":88,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 15:00:06.177: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
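------------------------------
Backing up to the failed DNS spec: the note Ginkgo printed inside the panic output ("you should call defer GinkgoRecover() at the top of the goroutine that caused this panic") is about assertions made off the spec's main goroutine. A Gomega failure panics, and Ginkgo can only rescue that panic on goroutines where GinkgoRecover is deferred. A minimal sketch against the Ginkgo v1 API vendored by this Kubernetes tree (doWork is a placeholder):

package e2esketch

import (
	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func doWork() error { return nil } // placeholder for the real work

var _ = It("asserts from a goroutine safely", func() {
	done := make(chan struct{})
	go func() {
		// Without this, a failing Expect panics straight through the
		// goroutine and produces the kind of unrecovered panic seen above.
		defer GinkgoRecover()
		defer close(done)
		Expect(doWork()).To(Succeed())
	}()
	<-done
})
------------------------------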
�[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: create the container �[1mSTEP�[0m: wait for the container to reach Succeeded �[1mSTEP�[0m: get the container status �[1mSTEP�[0m: the container should be terminated �[1mSTEP�[0m: the termination message should be set Jan 3 15:00:07.229: INFO: Expected: &{OK} to match Container's Termination Message: OK -- �[1mSTEP�[0m: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 15:00:07.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-runtime-6963" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":129,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 3 15:00:02.394: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1554 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: running the image docker.io/library/httpd:2.4.38-alpine Jan 3 15:00:02.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3637 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Jan 3 15:00:02.540: INFO: stderr: "" Jan 3 15:00:02.540: INFO: stdout: "pod/e2e-test-httpd-pod created\n" �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod is running �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod was created Jan 3 15:00:07.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3637 get pod e2e-test-httpd-pod -o json' Jan 3 15:00:07.681: INFO: stderr: "" Jan 3 15:00:07.681: INFO: stdout: "{\n \"apiVersion\": 
\"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2023-01-03T15:00:02Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2023-01-03T15:00:02Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"192.168.6.56\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2023-01-03T15:00:03Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3637\",\n \"resourceVersion\": \"11512\",\n \"uid\": \"69b51305-6bbc-4482-a442-7cf22ff89949\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-px6mq\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"k8s-upgrade-and-conformance-1wcp0z-worker-u044o2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-px6mq\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-px6mq\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-03T15:00:02Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": 
null,\n \"lastTransitionTime\": \"2023-01-03T15:00:03Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-03T15:00:03Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-03T15:00:02Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://656810213fbac336b7c2bd315b5118d7a083288033347797ca82aadb3cee80a8\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-01-03T15:00:03Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"192.168.6.56\",\n \"podIPs\": [\n {\n \"ip\": \"192.168.6.56\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-01-03T15:00:02Z\"\n }\n}\n" �[1mSTEP�[0m: replace the image in the pod Jan 3 15:00:07.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3637 replace -f -' Jan 3 15:00:08.803: INFO: stderr: "" Jan 3 15:00:08.803: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 Jan 3 15:00:08.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3637 delete pods e2e-test-httpd-pod' Jan 3 15:00:10.840: INFO: stderr: "" Jan 3 15:00:10.840: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 15:00:10.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-3637" for this suite. 
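------------------------------
The kubectl replace flow above works by dumping the pod as JSON, editing the image, and piping the result to 'kubectl replace -f -'. At the API level this is simply an update of spec.containers[].image, one of the few mutable pod spec fields; a client-go sketch of the same swap, using the pod name and target image from the log:

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// replacePodImage swaps the single container's image in place, the API-level
// equivalent of the kubectl replace performed by the test above.
func replacePodImage(cs kubernetes.Interface, ns string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), "e2e-test-httpd-pod", metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Spec.Containers[0].Image = "docker.io/library/busybox:1.29" // target image from the log
	_, err = cs.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
	return err
}
------------------------------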
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":70,"skipped":1133,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 3 15:00:10.882: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 3 15:00:10.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8973 create -f -'
Jan 3 15:00:11.232: INFO: stderr: ""
Jan 3 15:00:11.232: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
Jan 3 15:00:11.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8973 create -f -'
Jan 3 15:00:11.583: INFO: stderr: ""
Jan 3 15:00:11.583: INFO: stdout: "service/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jan 3 15:00:12.587: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 3 15:00:12.587: INFO: Found 1 / 1
Jan 3 15:00:12.587: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 3 15:00:12.591: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 3 15:00:12.591: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
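------------------------------
"Selector matched 1 pods for map[app:agnhost]" and "WaitFor completed with timeout 5m0s" describe a polled label-selector list; sketched with client-go below (the 2-second interval is an assumption; the 5m0s timeout is from the log):

package e2esketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsWithLabel polls the namespace until at least `want` pods match
// the app=agnhost selector, returning the matches for the describe loop.
func waitForPodsWithLabel(cs kubernetes.Interface, ns string, want int) ([]corev1.Pod, error) {
	var pods []corev1.Pod
	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		list, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=agnhost"})
		if err != nil {
			return false, err
		}
		pods = list.Items
		return len(pods) >= want, nil
	})
	return pods, err
}
------------------------------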
Jan 3 15:00:12.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8973 describe pod agnhost-primary-b2wx5' Jan 3 15:00:12.705: INFO: stderr: "" Jan 3 15:00:12.705: INFO: stdout: "Name: agnhost-primary-b2wx5\nNamespace: kubectl-8973\nPriority: 0\nNode: k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6/172.18.0.7\nStart Time: Tue, 03 Jan 2023 15:00:11 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 192.168.1.84\nIPs:\n IP: 192.168.1.84\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://ca73e1dcfa2395b6351ec3d10bc03e5ceea865cfc1605e1b1de83c675d5c1573\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 03 Jan 2023 15:00:12 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-g4lg9 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-g4lg9:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-g4lg9\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 1s default-scheduler Successfully assigned kubectl-8973/agnhost-primary-b2wx5 to k8s-upgrade-and-conformance-1wcp0z-md-0-5sqwg-5bdbcf68f6-t4mw6\n Normal Pulled 1s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 0s kubelet Started container agnhost-primary\n" Jan 3 15:00:12.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8973 describe rc agnhost-primary' Jan 3 15:00:12.838: INFO: stderr: "" Jan 3 15:00:12.838: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8973\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 1s replication-controller Created pod: agnhost-primary-b2wx5\n" Jan 3 15:00:12.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8973 describe service agnhost-primary' Jan 3 15:00:13.014: INFO: stderr: "" Jan 3 15:00:13.014: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8973\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Families: <none>\nIP: 10.136.188.82\nIPs: 10.136.188.82\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 192.168.1.84:6379\nSession Affinity: None\nEvents: <none>\n" Jan 3 15:00:13.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8973 describe node 
k8s-upgrade-and-conformance-1wcp0z-g74qf-mbqkf' Jan 3 15:00:13.158: INFO: stderr: "" Jan 3 15:00:13.158: INFO: stdout: "Name: k8s-upgrade-and-conformance-1wcp0z-g74qf-mbqkf\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=k8s-upgrade-and-conformance-1wcp0z-g74qf-mbqkf\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\nAnnotations: cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-1wcp0z\n cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-z6xd2e\n cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-1wcp0z-g74qf-mbqkf\n cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-1wcp0z-g74qf\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Tue, 03 Jan 2023 14:31:23 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: k8s-upgrade-and-conformance-1wcp0z-g74qf-mbqkf\n AcquireTime: <unset>\n RenewTime: Tue, 03 Jan 2023 15:00:08 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 03 Jan 2023 14:57:10 +0000 Tue, 03 Jan 2023 14:31:23 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 03 Jan 2023 14:57:10 +0000 Tue, 03 Jan 2023 14:31:23 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 03 Jan 2023 14:57:10 +0000 Tue, 03 Jan 2023 14:31:23 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 03 Jan 2023 14:57:10 +0000 Tue, 03 Jan 2023 14:32:04 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.9\n Hostname: k8s-upgrade-and-conformance-1wcp0z-g74qf-mbqkf\nCapacity:\n cpu: 8\n ephemeral-storage: 253869360Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65860692Ki\n pods: 110\nAllocatable:\n cpu: 8\n ephemeral-storage: 253869360Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65860692Ki\n pods: 110\nSystem Info:\n Machine ID: 6314032f87b04e328fd5402af1046246\n System UUID: 2b16cdfd-9cd9-4fac-9bab-5dab7132518a\n Boot ID: 3b87e223-b376-40c9-b368-d6be257833d3\n Kernel Version: 5.4.0-1081-gke\n OS Image: Ubuntu 22.04.1 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.9\n Kubelet Version: v1.20.15\n Kube-Proxy Version: v1.20.15\nPodCIDR: 192.168.5.0/24\nPodCIDRs: 192.168.5.0/24\nProviderID: docker:////k8s-upgrade-and-conformance-1wcp0z-g74qf-mbqkf\nNon-terminated Pods: (6 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-k8s-upgrade-and-conformance-1wcp0z-g74qf-mbqkf 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 28m\n kube-system kindnet-4qtll 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 28m\n kube-system kube-apiserver-k8s-upgrade-and-conformance-1wcp0z-g74qf-mbqkf 250m (3%) 0 (0%) 0 (0%) 0 (0%) 28m\n kube-system kube-controller-manager-k8s-upgrade-and-conformance-1wcp0z-g74qf-mbqkf 200m (2%) 0 (0%) 0 (0%) 0 (0%) 28m\n kube-system kube-proxy-cqrcp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24m\n kube-system kube-scheduler-k8s-upgrade-and-conformance-1wcp0z-g74qf-mbqkf 100m 
(1%) 0 (0%) 0 (0%) 0 (0%) 28m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 750m (9%) 100m (1%)\n memory 150Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 28m kubelet Starting kubelet.\n Warning InvalidDiskCapacity 28m kubelet invalid capacity 0 on image filesystem\n Normal NodeHasSufficientMemory 28m (x2 over 28m) kubelet Node k8s-upgrade-and-conformance-1wcp0z-g74qf-mbqkf status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 28m (x2 over 28m) kubelet Node k8s-upgrade-and-conformance-1wcp0z-g74qf-mbqkf status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 28m (x2 over 28m) kubelet Node k8s-upgrade-and-conformance-1wcp0z-g74qf-mbqkf status is now: NodeHasSufficientPID\n Warning CheckLimitsForResolvConf 28m kubelet Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n Normal NodeAllocatableEnforced 28m kubelet Updated Node Allocatable limit across pods\n Normal Starting 28m kube-proxy Starting kube-proxy.\n Normal NodeReady 28m kubelet Node k8s-upgrade-and-conformance-1wcp0z-g74qf-mbqkf status is now: NodeReady\n Normal Starting 24m kube-proxy Starting kube-proxy.\n" Jan 3 15:00:13.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8973 describe namespace kubectl-8973' Jan 3 15:00:13.265: INFO: stderr: "" Jan 3 15:00:13.265: INFO: stdout: "Name: kubectl-8973\nLabels: e2e-framework=kubectl\n e2e-run=c80a289b-e707-4ca9-bcec-aa87f29054a8\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 15:00:13.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8973" for this suite.
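The "Kubectl describe" spec above asserts that kubectl describe renders the relevant fields for each resource kind it prints. The equivalent manual sequence, assuming the same namespace and object names as in the log (the node name placeholder is illustrative):

kubectl --namespace=kubectl-8973 describe pod agnhost-primary-b2wx5   # conditions, container state, events
kubectl --namespace=kubectl-8973 describe rc agnhost-primary          # replica counts, pod-creation events
kubectl --namespace=kubectl-8973 describe service agnhost-primary     # ClusterIP, port, endpoints
kubectl describe node <node-name>                                     # capacity, allocatable, pods, events
kubectl describe namespace kubectl-8973                               # quota and LimitRange, if any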
------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":71,"skipped":1145,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]} [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 3 15:00:13.276: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 3 15:00:13.316: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-fc9ce2bc-f7bf-4776-a1cd-70e1a3dfee3e" in namespace "security-context-test-5691" to be "Succeeded or Failed" Jan 3 15:00:13.320: INFO: Pod "busybox-readonly-false-fc9ce2bc-f7bf-4776-a1cd-70e1a3dfee3e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.744813ms Jan 3 15:00:15.327: INFO: Pod "busybox-readonly-false-fc9ce2bc-f7bf-4776-a1cd-70e1a3dfee3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010099962s Jan 3 15:00:15.327: INFO: Pod "busybox-readonly-false-fc9ce2bc-f7bf-4776-a1cd-70e1a3dfee3e" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 15:00:15.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5691" for this suite.
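The Security Context spec above runs a busybox container with readOnlyRootFilesystem=false and expects a write to the root filesystem to succeed, so the pod ends "Succeeded". A minimal sketch of such a pod, assuming kubectl against any test cluster; the pod name and command are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    # With readOnlyRootFilesystem: false this write succeeds and the pod
    # completes; setting it to true would make the same write fail.
    securityContext:
      readOnlyRootFilesystem: false
    command: ["sh", "-c", "echo ok > /probe && cat /probe"]
EOF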
------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":72,"skipped":1145,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]} ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 3 15:00:15.422: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 3 15:00:18.495: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 15:00:19.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5371" for this suite.
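The ReplicaSet spec above demonstrates controller adoption and release: an existing pod whose labels match a new ReplicaSet's selector is adopted (gains an ownerReference) rather than duplicated, and relabeling the pod so the selector no longer matches releases it again. A sketch of the same dance, with illustrative names:

kubectl run pod-adoption-release --image=k8s.gcr.io/e2e-test-images/agnhost:2.21 \
  --restart=Never --labels=name=pod-adoption-release
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21
EOF
# The ReplicaSet adopts the existing pod instead of creating a second one.
# Relabeling the pod releases it; the ReplicaSet then spawns a replacement.
kubectl label pod pod-adoption-release name=released --overwrite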
------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":73,"skipped":1194,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]} ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 3 15:00:19.523: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jan 3 15:00:19.575: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7825 121230d6-cf1e-4cf7-ad99-8178da97c815 11758 0 2023-01-03 15:00:19 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-03 15:00:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 3 15:00:19.575: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7825 121230d6-cf1e-4cf7-ad99-8178da97c815 11759 0 2023-01-03 15:00:19 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-03 15:00:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 3 15:00:19.599: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7825 121230d6-cf1e-4cf7-ad99-8178da97c815 11761 0 2023-01-03 15:00:19 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-03 15:00:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 3 15:00:19.599: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7825 121230d6-cf1e-4cf7-ad99-8178da97c815 11762 0 2023-01-03 15:00:19 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-03 15:00:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach]
[sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 15:00:19.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7825" for this suite. ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":74,"skipped":1195,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]} ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 3 15:00:19.618: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 3 15:00:21.669: INFO: Deleting pod "var-expansion-bd3aa392-4f69-406d-b453-f1f1c4ed03dc" in namespace "var-expansion-7117" Jan 3 15:00:21.676: INFO: Wait up to 5m0s for pod "var-expansion-bd3aa392-4f69-406d-b453-f1f1c4ed03dc" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 15:00:25.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7117" for this suite.
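The Variable Expansion spec above asserts the negative case for subPathExpr: the kubelet refuses to mount a volume subpath that expands to an absolute path, so the pod never starts and the test just deletes it. A sketch of a pod that should fail this way (all names illustrative; this is an assumption-level reconstruction of the spec's pod, not its exact manifest):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-abs-subpath
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    env:
    - name: ABS_PATH
      value: "/tmp"          # expands to an absolute path
    volumeMounts:
    - name: work
      mountPath: /data
      # subPathExpr must resolve to a relative path inside the volume;
      # "$(ABS_PATH)" resolves to /tmp, so the kubelet rejects the mount.
      subPathExpr: "$(ABS_PATH)"
  volumes:
  - name: work
    emptyDir: {}
EOF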
------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":-1,"completed":75,"skipped":1198,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]} ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 3 15:00:25.779: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting the auto-created API token STEP: reading a file in the container Jan 3 15:00:28.346: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3042 pod-service-account-9114cf07-b578-4076-a0c5-194420bd5007 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 3 15:00:28.517: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3042 pod-service-account-9114cf07-b578-4076-a0c5-194420bd5007 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 3 15:00:28.692: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3042 pod-service-account-9114cf07-b578-4076-a0c5-194420bd5007 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 15:00:28.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3042" for this suite.
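The ServiceAccounts spec reads the three files that the service-account admission machinery projects into every container, then checks their contents. Reproducing it by hand (pod name illustrative, mount paths as in the log):

kubectl run sa-probe --image=docker.io/library/busybox:1.29 --restart=Never -- sh -c "sleep 3600"
kubectl wait --for=condition=Ready pod/sa-probe
kubectl exec sa-probe -- cat /var/run/secrets/kubernetes.io/serviceaccount/token      # bearer token
kubectl exec sa-probe -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt     # cluster CA bundle
kubectl exec sa-probe -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace  # pod's namespace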
------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":76,"skipped":1252,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]} ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 3 14:59:48.197: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6096, will wait for the garbage collector to delete the pods Jan 3 14:59:50.303: INFO: Deleting Job.batch foo took: 6.558569ms Jan 3 14:59:50.403: INFO: Terminating Job.batch foo pods took: 100.349646ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 15:00:30.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6096" for this suite.
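"Will wait for the garbage collector to delete the pods" above refers to cascading deletion: the Job is removed and its pods are then reaped by the garbage collector via their ownerReferences. A sketch with an illustrative job (the --cascade=foreground spelling is for current kubectl; clients contemporary with this v1.20 run used the boolean --cascade=true):

kubectl create job foo --image=docker.io/library/busybox:1.29 -- sh -c "sleep 3600"
# Foreground deletion keeps the Job around (with a deletionTimestamp) until
# the garbage collector has deleted its pods, then removes the Job itself.
kubectl delete job foo --cascade=foreground
kubectl get pods -l job-name=foo   # empty once the GC has finished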
------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":21,"skipped":504,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]} ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 3 15:00:30.174: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Jan 3 15:00:32.743: INFO: Successfully updated pod "labelsupdatec52d11ba-38fd-4925-8e46-fa75356b50bf" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 15:00:36.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4907" for this suite.
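The Projected downwardAPI spec relies on the kubelet refreshing a projected downwardAPI file when pod metadata changes, so a label edit shows up inside the container without a restart. A sketch of the mechanism (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    key1: value1
spec:
  containers:
  - name: client
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
# After relabeling, the kubelet rewrites /etc/podinfo/labels in place and the
# loop above starts printing the new value -- the behavior the spec asserts.
kubectl label pod labelsupdate-demo key2=value2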
------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":539,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]} ------------------------------ [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 3 15:00:36.851: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 15:00:36.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2578" for this suite.
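The QOS Class spec encodes the rule that a pod whose every container sets requests equal to limits for both cpu and memory is classed Guaranteed. A sketch (pod name illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21
    resources:
      requests:        # requests == limits for every resource in every
        cpu: 100m      # container => status.qosClass is "Guaranteed"
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints Guaranteed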
------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":23,"skipped":593,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]} ------------------------------ [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 3 15:00:28.946: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating server pod server in namespace prestop-4663 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-4663 STEP: Deleting pre-stop pod Jan 3 15:00:38.015: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 15:00:38.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4663" for this suite.
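The PreStop spec verifies hook ordering on deletion: the tester pod's preStop hook runs (and reports to the server pod, hence "prestop": 1 in the log) before the container is killed. A self-contained sketch of the mechanism, using an exec hook instead of the spec's HTTP callback (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs after the delete request is accepted but before SIGTERM
          # reaches the container's main process.
          command: ["sh", "-c", "echo prestop > /tmp/hook.log; sleep 2"]
EOF
kubectl delete pod prestop-demo   # triggers the preStop hook first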
------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":77,"skipped":1291,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]} ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 3 15:00:36.929: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 3 15:00:43.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3810" for this suite. ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated.
[Conformance]","total":-1,"completed":24,"skipped":607,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 3 15:00:44.029: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename svc-latency �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 3 15:00:44.067: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: creating replication controller svc-latency-rc in namespace svc-latency-5953 I0103 15:00:44.083501 20 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5953, replica count: 1 I0103 15:00:45.133972 20 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0103 15:00:46.134302 20 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 15:00:46.245: INFO: Created: latency-svc-8jvp5 Jan 3 15:00:46.260: INFO: Got endpoints: latency-svc-8jvp5 [25.973008ms] Jan 3 15:00:46.290: INFO: Created: latency-svc-qrwtg Jan 3 15:00:46.294: INFO: Created: latency-svc-g2zpc Jan 3 15:00:46.296: INFO: Got endpoints: latency-svc-qrwtg [35.907392ms] Jan 3 15:00:46.300: INFO: Got endpoints: latency-svc-g2zpc [39.439393ms] Jan 3 15:00:46.313: INFO: Created: latency-svc-ztkdp Jan 3 15:00:46.320: INFO: Got endpoints: latency-svc-ztkdp [59.678535ms] Jan 3 15:00:46.329: INFO: Created: latency-svc-bp84l Jan 3 15:00:46.355: INFO: Created: latency-svc-vllst Jan 3 15:00:46.361: INFO: Got endpoints: latency-svc-bp84l [98.703215ms] Jan 3 15:00:46.367: INFO: Got endpoints: latency-svc-vllst [105.46783ms] Jan 3 15:00:46.376: INFO: Created: latency-svc-5bnsl Jan 3 15:00:46.390: INFO: Got endpoints: latency-svc-5bnsl [128.488513ms] Jan 3 15:00:46.394: INFO: Created: latency-svc-2fg7c Jan 3 15:00:46.401: INFO: Got endpoints: latency-svc-2fg7c [138.703004ms] Jan 3 15:00:46.408: INFO: Created: latency-svc-9k2p8 Jan 3 15:00:46.418: INFO: Created: latency-svc-9gjs4 Jan 3 15:00:46.424: INFO: Got endpoints: latency-svc-9k2p8 [163.047699ms] Jan 3 15:00:46.425: INFO: Created: latency-svc-rlxzf Jan 3 15:00:46.429: INFO: Got endpoints: latency-svc-9gjs4 [167.1082ms] Jan 3 15:00:46.430: INFO: Got endpoints: latency-svc-rlxzf [169.334313ms] Jan 3 15:00:46.437: INFO: Created: latency-svc-f2gxr Jan 3 15:00:46.447: INFO: Got endpoints: latency-svc-f2gxr [186.451414ms] Jan 3 15:00:46.455: 
INFO: Created: latency-svc-m28jc Jan 3 15:00:46.462: INFO: Got endpoints: latency-svc-m28jc [200.930056ms] Jan 3 15:00:46.464: INFO: Created: latency-svc-k2s7j Jan 3 15:00:46.469: INFO: Got endpoints: latency-svc-k2s7j [207.306776ms] Jan 3 15:00:46.472: INFO: Created: latency-svc-c5drz Jan 3 15:00:46.476: INFO: Got endpoints: latency-svc-c5drz [215.333562ms] Jan 3 15:00:46.485: INFO: Created: latency-svc-dh2xn Jan 3 15:00:46.497: INFO: Got endpoints: latency-svc-dh2xn [234.761247ms] Jan 3 15:00:46.501: INFO: Created: latency-svc-4kvft Jan 3 15:00:46.510: INFO: Created: latency-svc-qql9d Jan 3 15:00:46.511: INFO: Got endpoints: latency-svc-4kvft [214.351495ms] Jan 3 15:00:46.516: INFO: Got endpoints: latency-svc-qql9d [215.320072ms] Jan 3 15:00:46.523: INFO: Created: latency-svc-h5cfs Jan 3 15:00:46.533: INFO: Created: latency-svc-xlxrp Jan 3 15:00:46.533: INFO: Got endpoints: latency-svc-h5cfs [212.830639ms] Jan 3 15:00:46.542: INFO: Got endpoints: latency-svc-xlxrp [181.193043ms] Jan 3 15:00:46.543: INFO: Created: latency-svc-98s74 Jan 3 15:00:46.547: INFO: Got endpoints: latency-svc-98s74 [180.257012ms] Jan 3 15:00:46.559: INFO: Created: latency-svc-mq42f Jan 3 15:00:46.567: INFO: Got endpoints: latency-svc-mq42f [177.043558ms] Jan 3 15:00:46.569: INFO: Created: latency-svc-xvw2v Jan 3 15:00:46.577: INFO: Got endpoints: latency-svc-xvw2v [175.388524ms] Jan 3 15:00:46.585: INFO: Created: latency-svc-d76m2 Jan 3 15:00:46.589: INFO: Got endpoints: latency-svc-d76m2 [165.530041ms] Jan 3 15:00:46.607: INFO: Created: latency-svc-mrch6 Jan 3 15:00:46.615: INFO: Got endpoints: latency-svc-mrch6 [186.412723ms] Jan 3 15:00:46.616: INFO: Created: latency-svc-m2tvw Jan 3 15:00:46.626: INFO: Got endpoints: latency-svc-m2tvw [196.095484ms] Jan 3 15:00:46.629: INFO: Created: latency-svc-hsvnp Jan 3 15:00:46.642: INFO: Got endpoints: latency-svc-hsvnp [194.075515ms] Jan 3 15:00:46.658: INFO: Created: latency-svc-c9gwj Jan 3 15:00:46.671: INFO: Got endpoints: latency-svc-c9gwj [208.595853ms] Jan 3 15:00:46.675: INFO: Created: latency-svc-5t4sp Jan 3 15:00:46.686: INFO: Created: latency-svc-gzrk7 Jan 3 15:00:46.686: INFO: Got endpoints: latency-svc-5t4sp [217.414342ms] Jan 3 15:00:46.696: INFO: Got endpoints: latency-svc-gzrk7 [219.857425ms] Jan 3 15:00:46.697: INFO: Created: latency-svc-kx2xv Jan 3 15:00:46.704: INFO: Got endpoints: latency-svc-kx2xv [206.742581ms] Jan 3 15:00:46.715: INFO: Created: latency-svc-xpjr9 Jan 3 15:00:46.722: INFO: Got endpoints: latency-svc-xpjr9 [211.18676ms] Jan 3 15:00:46.725: INFO: Created: latency-svc-lffdr Jan 3 15:00:46.731: INFO: Got endpoints: latency-svc-lffdr [214.542141ms] Jan 3 15:00:46.734: INFO: Created: latency-svc-zsf8w Jan 3 15:00:46.737: INFO: Got endpoints: latency-svc-zsf8w [203.91315ms] Jan 3 15:00:46.743: INFO: Created: latency-svc-v7jd5 Jan 3 15:00:46.752: INFO: Got endpoints: latency-svc-v7jd5 [209.686685ms] Jan 3 15:00:46.754: INFO: Created: latency-svc-jcscz Jan 3 15:00:46.761: INFO: Got endpoints: latency-svc-jcscz [212.924636ms] Jan 3 15:00:46.765: INFO: Created: latency-svc-9bpz6 Jan 3 15:00:46.778: INFO: Got endpoints: latency-svc-9bpz6 [211.103204ms] Jan 3 15:00:46.781: INFO: Created: latency-svc-b9p2p Jan 3 15:00:46.788: INFO: Created: latency-svc-mln2v Jan 3 15:00:46.788: INFO: Got endpoints: latency-svc-b9p2p [211.777517ms] Jan 3 15:00:46.803: INFO: Created: latency-svc-rllkx Jan 3 15:00:46.803: INFO: Got endpoints: latency-svc-mln2v [213.341363ms] Jan 3 15:00:46.806: INFO: Got endpoints: latency-svc-rllkx [191.119355ms] Jan 3 15:00:46.814: 
INFO: Created: latency-svc-vnvhf Jan 3 15:00:46.826: INFO: Got endpoints: latency-svc-vnvhf [198.932261ms] Jan 3 15:00:46.832: INFO: Created: latency-svc-mpm54 Jan 3 15:00:46.837: INFO: Created: latency-svc-l5h7k Jan 3 15:00:46.852: INFO: Created: latency-svc-fk6sh Jan 3 15:00:46.855: INFO: Got endpoints: latency-svc-mpm54 [213.142034ms] Jan 3 15:00:46.857: INFO: Created: latency-svc-xxwh2 Jan 3 15:00:46.868: INFO: Created: latency-svc-z4hps Jan 3 15:00:46.874: INFO: Created: latency-svc-wn5l2 Jan 3 15:00:46.880: INFO: Created: latency-svc-2ssqp Jan 3 15:00:46.892: INFO: Created: latency-svc-2bg8m Jan 3 15:00:46.905: INFO: Got endpoints: latency-svc-l5h7k [233.854164ms] Jan 3 15:00:46.907: INFO: Created: latency-svc-xz976 Jan 3 15:00:46.917: INFO: Created: latency-svc-z8l5d Jan 3 15:00:46.926: INFO: Created: latency-svc-xgqxp Jan 3 15:00:46.938: INFO: Created: latency-svc-bbnck Jan 3 15:00:46.946: INFO: Created: latency-svc-4qjlf Jan 3 15:00:46.952: INFO: Got endpoints: latency-svc-fk6sh [265.398273ms] Jan 3 15:00:46.958: INFO: Created: latency-svc-5cxhs Jan 3 15:00:46.966: INFO: Created: latency-svc-ntmkc Jan 3 15:00:46.975: INFO: Created: latency-svc-6hzk6 Jan 3 15:00:46.984: INFO: Created: latency-svc-8flvl Jan 3 15:00:46.991: INFO: Created: latency-svc-bwrjr Jan 3 15:00:47.002: INFO: Got endpoints: latency-svc-xxwh2 [305.660004ms] Jan 3 15:00:47.014: INFO: Created: latency-svc-82xbg Jan 3 15:00:47.050: INFO: Got endpoints: latency-svc-z4hps [346.013399ms] Jan 3 15:00:47.063: INFO: Created: latency-svc-9tldn Jan 3 15:00:47.100: INFO: Got endpoints: latency-svc-wn5l2 [378.208298ms] Jan 3 15:00:47.118: INFO: Created: latency-svc-wsdgn Jan 3 15:00:47.152: INFO: Got endpoints: latency-svc-2ssqp [421.349662ms] Jan 3 15:00:47.167: INFO: Created: latency-svc-nnctq Jan 3 15:00:47.202: INFO: Got endpoints: latency-svc-2bg8m [464.658169ms] Jan 3 15:00:47.220: INFO: Created: latency-svc-tpdw5 Jan 3 15:00:47.251: INFO: Got endpoints: latency-svc-xz976 [499.245044ms] Jan 3 15:00:47.286: INFO: Created: latency-svc-xnfsw Jan 3 15:00:47.303: INFO: Got endpoints: latency-svc-z8l5d [542.14756ms] Jan 3 15:00:47.321: INFO: Created: latency-svc-6gm8w Jan 3 15:00:47.351: INFO: Got endpoints: latency-svc-xgqxp [573.145025ms] Jan 3 15:00:47.379: INFO: Created: latency-svc-c4s8m Jan 3 15:00:47.401: INFO: Got endpoints: latency-svc-bbnck [612.636517ms] Jan 3 15:00:47.416: INFO: Created: latency-svc-bwl4j Jan 3 15:00:47.455: INFO: Got endpoints: latency-svc-4qjlf [650.976696ms] Jan 3 15:00:47.478: INFO: Created: latency-svc-c2dbj Jan 3 15:00:47.504: INFO: Got endpoints: latency-svc-5cxhs [697.785803ms] Jan 3 15:00:47.520: INFO: Created: latency-svc-8crx9 Jan 3 15:00:47.552: INFO: Got endpoints: latency-svc-ntmkc [726.363281ms] Jan 3 15:00:47.568: INFO: Created: latency-svc-j52jz Jan 3 15:00:47.601: INFO: Got endpoints: latency-svc-6hzk6 [745.838714ms] Jan 3 15:00:47.616: INFO: Created: latency-svc-d4wqm Jan 3 15:00:47.651: INFO: Got endpoints: latency-svc-8flvl [746.377461ms] Jan 3 15:00:47.665: INFO: Created: latency-svc-t6gth Jan 3 15:00:47.701: INFO: Got endpoints: latency-svc-bwrjr [748.961687ms] Jan 3 15:00:47.717: INFO: Created: latency-svc-78fcq Jan 3 15:00:47.752: INFO: Got endpoints: latency-svc-82xbg [748.712715ms] Jan 3 15:00:47.763: INFO: Created: latency-svc-6zn9c Jan 3 15:00:47.807: INFO: Got endpoints: latency-svc-9tldn [757.387696ms] Jan 3 15:00:47.823: INFO: Created: latency-svc-j979t Jan 3 15:00:47.850: INFO: Got endpoints: latency-svc-wsdgn [750.066984ms] Jan 3 15:00:47.864: INFO: Created: 
latency-svc-wfdxk Jan 3 15:00:47.903: INFO: Got endpoints: latency-svc-nnctq [750.213629ms] Jan 3 15:00:47.916: INFO: Created: latency-svc-gtzpt Jan 3 15:00:47.953: INFO: Got endpoints: latency-svc-tpdw5 [750.367541ms] Jan 3 15:00:47.972: INFO: Created: latency-svc-87pvn Jan 3 15:00:48.000: INFO: Got endpoints: latency-svc-xnfsw [749.139626ms] Jan 3 15:00:48.020: INFO: Created: latency-svc-t9nnk Jan 3 15:00:48.051: INFO: Got endpoints: latency-svc-6gm8w [747.441246ms] Jan 3 15:00:48.066: INFO: Created: latency-svc-tgnjt Jan 3 15:00:48.100: INFO: Got endpoints: latency-svc-c4s8m [749.067072ms] Jan 3 15:00:48.119: INFO: Created: latency-svc-btwbc Jan 3 15:00:48.150: INFO: Got endpoints: latency-svc-bwl4j [748.806975ms] Jan 3 15:00:48.162: INFO: Created: latency-svc-dxd5q Jan 3 15:00:48.203: INFO: Got endpoints: latency-svc-c2dbj [748.091585ms] Jan 3 15:00:48.215: INFO: Created: latency-svc-v2zfr Jan 3 15:00:48.249: INFO: Got endpoints: latency-svc-8crx9 [744.828097ms] Jan 3 15:00:48.269: INFO: Created: latency-svc-dqsns Jan 3 15:00:48.303: INFO: Got endpoints: latency-svc-j52jz [750.217393ms] Jan 3 15:00:48.340: INFO: Created: latency-svc-kxsmg Jan 3 15:00:48.351: INFO: Got endpoints: latency-svc-d4wqm [750.15979ms] Jan 3 15:00:48.367: INFO: Created: latency-svc-56z88 Jan 3 15:00:48.400: INFO: Got endpoints: latency-svc-t6gth [748.416287ms] Jan 3 15:00:48.415: INFO: Created: latency-svc-n58bx Jan 3 15:00:48.450: INFO: Got endpoints: latency-svc-78fcq [748.864982ms] Jan 3 15:00:48.463: INFO: Created: latency-svc-l28s8 Jan 3 15:00:48.500: INFO: Got endpoints: latency-svc-6zn9c [748.018793ms] Jan 3 15:00:48.518: INFO: Created: latency-svc-q4q7g Jan 3 15:00:48.552: INFO: Got endpoints: latency-svc-j979t [744.922709ms] Jan 3 15:00:48.565: INFO: Created: latency-svc-n7zw5 Jan 3 15:00:48.600: INFO: Got endpoints: latency-svc-wfdxk [749.371467ms] Jan 3 15:00:48.613: INFO: Created: latency-svc-vwwlq Jan 3 15:00:48.666: INFO: Got endpoints: latency-svc-gtzpt [763.329431ms] Jan 3 15:00:48.687: INFO: Created: latency-svc-wzbb9 Jan 3 15:00:48.702: INFO: Got endpoints: latency-svc-87pvn [749.322558ms] Jan 3 15:00:48.714: INFO: Created: latency-svc-9rfnh Jan 3 15:00:48.753: INFO: Got endpoints: latency-svc-t9nnk [753.26684ms] Jan 3 15:00:48.766: INFO: Created: latency-svc-wtjlh Jan 3 15:00:48.802: INFO: Got endpoints: latency-svc-tgnjt [750.896323ms] Jan 3 15:00:48.813: INFO: Created: latency-svc-xjk79 Jan 3 15:00:48.852: INFO: Got endpoints: latency-svc-btwbc [751.348641ms] Jan 3 15:00:48.874: INFO: Created: latency-svc-qctbd Jan 3 15:00:48.900: INFO: Got endpoints: latency-svc-dxd5q [749.249862ms] Jan 3 15:00:48.913: INFO: Created: latency-svc-6nh6j Jan 3 15:00:48.953: INFO: Got endpoints: latency-svc-v2zfr [750.051977ms] Jan 3 15:00:48.968: INFO: Created: latency-svc-jgb9q Jan 3 15:00:49.000: INFO: Got endpoints: latency-svc-dqsns [750.55297ms] Jan 3 15:00:49.015: INFO: Created: latency-svc-k6vqz Jan 3 15:00:49.049: INFO: Got endpoints: latency-svc-kxsmg [746.751666ms] Jan 3 15:00:49.061: INFO: Created: latency-svc-9h5bw Jan 3 15:00:49.103: INFO: Got endpoints: latency-svc-56z88 [751.237769ms] Jan 3 15:00:49.114: INFO: Created: latency-svc-5hf4n Jan 3 15:00:49.155: INFO: Got endpoints: latency-svc-n58bx [755.205193ms] Jan 3 15:00:49.166: INFO: Created: latency-svc-htpzj Jan 3 15:00:49.199: INFO: Got endpoints: latency-svc-l28s8 [749.856762ms] Jan 3 15:00:49.212: INFO: Created: latency-svc-pxlhv Jan 3 15:00:49.251: INFO: Got endpoints: latency-svc-q4q7g [751.110427ms] Jan 3 15:00:49.271: INFO: Created: 
latency-svc-9j8fl Jan 3 15:00:49.305: INFO: Got endpoints: latency-svc-n7zw5 [752.552479ms] Jan 3 15:00:49.336: INFO: Created: latency-svc-98z7c Jan 3 15:00:49.351: INFO: Got endpoints: latency-svc-vwwlq [751.683434ms] Jan 3 15:00:49.365: INFO: Created: latency-svc-88pls Jan 3 15:00:49.400: INFO: Got endpoints: latency-svc-wzbb9 [734.059851ms] Jan 3 15:00:49.412: INFO: Created: latency-svc-6mtps Jan 3 15:00:49.450: INFO: Got endpoints: latency-svc-9rfnh [747.960169ms] Jan 3 15:00:49.463: INFO: Created: latency-svc-5ddmz Jan 3 15:00:49.500: INFO: Got endpoints: latency-svc-wtjlh [746.849549ms] Jan 3 15:00:49.511: INFO: Created: latency-svc-jzdcf Jan 3 15:00:49.550: INFO: Got endpoints: latency-svc-xjk79 [748.583919ms] Jan 3 15:00:49.565: INFO: Created: latency-svc-7777w Jan 3 15:00:49.600: INFO: Got endpoints: latency-svc-qctbd [748.116182ms] Jan 3 15:00:49.616: INFO: Created: latency-svc-t2sbs Jan 3 15:00:49.652: INFO: Got endpoints: latency-svc-6nh6j [752.859173ms] Jan 3 15:00:49.665: INFO: Created: latency-svc-n5zsx Jan 3 15:00:49.700: INFO: Got endpoints: latency-svc-jgb9q [747.189994ms] Jan 3 15:00:49.715: INFO: Created: latency-svc-8jssc Jan 3 15:00:49.750: INFO: Got endpoints: latency-svc-k6vqz [749.709722ms] Jan 3 15:00:49.763: INFO: Created: latency-svc-gpt5w Jan 3 15:00:49.803: INFO: Got endpoints: latency-svc-9h5bw [753.073321ms] Jan 3 15:00:49.819: INFO: Created: latency-svc-d4xqk Jan 3 15:00:49.855: INFO: Got endpoints: latency-svc-5hf4n [751.959147ms] Jan 3 15:00:49.866: INFO: Created: latency-svc-7hlwc Jan 3 15:00:49.901: INFO: Got endpoints: latency-svc-htpzj [745.512593ms] Jan 3 15:00:49.911: INFO: Created: latency-svc-5jpn8 Jan 3 15:00:49.951: INFO: Got endpoints: latency-svc-pxlhv [751.671097ms] Jan 3 15:00:49.967: INFO: Created: latency-svc-lrz58 Jan 3 15:00:50.000: INFO: Got endpoints: latency-svc-9j8fl [749.049546ms] Jan 3 15:00:50.016: INFO: Created: latency-svc-45dnn Jan 3 15:00:50.054: INFO: Got endpoints: latency-svc-98z7c [749.112812ms] Jan 3 15:00:50.075: INFO: Created: latency-svc-wmdjf Jan 3 15:00:50.103: INFO: Got endpoints: latency-svc-88pls [751.350788ms] Jan 3 15:00:50.120: INFO: Created: latency-svc-zc57b Jan 3 15:00:50.151: INFO: Got endpoints: latency-svc-6mtps [750.939543ms] Jan 3 15:00:50.162: INFO: Created: latency-svc-zgz5v Jan 3 15:00:50.202: INFO: Got endpoints: latency-svc-5ddmz [751.778023ms] Jan 3 15:00:50.215: INFO: Created: latency-svc-kjd7w Jan 3 15:00:50.251: INFO: Got endpoints: latency-svc-jzdcf [750.045872ms] Jan 3 15:00:50.270: INFO: Created: latency-svc-qpd47 Jan 3 15:00:50.303: INFO: Got endpoints: latency-svc-7777w [752.501037ms] Jan 3 15:00:50.323: INFO: Created: latency-svc-pxr52 Jan 3 15:00:50.357: INFO: Got endpoints: latency-svc-t2sbs [756.876295ms] Jan 3 15:00:50.377: INFO: Created: latency-svc-jkjcx Jan 3 15:00:50.400: INFO: Got endpoints: latency-svc-n5zsx [747.874843ms] Jan 3 15:00:50.417: INFO: Created: latency-svc-hqzc5 Jan 3 15:00:50.458: INFO: Got endpoints: latency-svc-8jssc [757.745313ms] Jan 3 15:00:50.472: INFO: Created: latency-svc-bbtnx Jan 3 15:00:50.499: INFO: Got endpoints: latency-svc-gpt5w [749.793863ms] Jan 3 15:00:50.512: INFO: Created: latency-svc-bj9vz Jan 3 15:00:50.550: INFO: Got endpoints: latency-svc-d4xqk [747.053968ms] Jan 3 15:00:50.573: INFO: Created: latency-svc-646g4 Jan 3 15:00:50.604: INFO: Got endpoints: latency-svc-7hlwc [749.408226ms] Jan 3 15:00:50.618: INFO: Created: latency-svc-hscv6 Jan 3 15:00:50.650: INFO: Got endpoints: latency-svc-5jpn8 [749.709374ms] Jan 3 15:00:50.664: INFO: 
Created: latency-svc-nn7zv Jan 3 15:00:50.700: INFO: Got endpoints: latency-svc-lrz58 [748.975649ms] Jan 3 15:00:50.713: INFO: Created: latency-svc-rf9fx Jan 3 15:00:50.750: INFO: Got endpoints: latency-svc-45dnn [749.983344ms] Jan 3 15:00:50.762: INFO: Created: latency-svc-x68vl Jan 3 15:00:50.800: INFO: Got endpoints: latency-svc-wmdjf [745.413024ms] Jan 3 15:00:50.813: INFO: Created: latency-svc-q4d72 Jan 3 15:00:50.851: INFO: Got endpoints: latency-svc-zc57b [747.946334ms] Jan 3 15:00:50.862: INFO: Created: latency-svc-87kzg Jan 3 15:00:50.900: INFO: Got endpoints: latency-svc-zgz5v [748.844074ms] Jan 3 15:00:50.917: INFO: Created: latency-svc-rfthq Jan 3 15:00:50.950: INFO: Got endpoints: latency-svc-kjd7w [748.309134ms] Jan 3 15:00:50.963: INFO: Created: latency-svc-qmd6w Jan 3 15:00:51.001: INFO: Got endpoints: latency-svc-qpd47 [750.111366ms] Jan 3 15:00:51.014: INFO: Created: latency-svc-4d8p4 Jan 3 15:00:51.052: INFO: Got endpoints: latency-svc-pxr52 [748.969125ms] Jan 3 15:00:51.064: INFO: Created: latency-svc-rp8xg Jan 3 15:00:51.100: INFO: Got endpoints: latency-svc-jkjcx [742.892359ms] Jan 3 15:00:51.114: INFO: Created: latency-svc-dl76h Jan 3 15:00:51.151: INFO: Got endpoints: latency-svc-hqzc5 [750.832414ms] Jan 3 15:00:51.163: INFO: Created: latency-svc-6765s Jan 3 15:00:51.203: INFO: Got endpoints: latency-svc-bbtnx [745.241418ms] Jan 3 15:00:51.217: INFO: Created: latency-svc-b79lk Jan 3 15:00:51.254: INFO: Got endpoints: latency-svc-bj9vz [754.221118ms] Jan 3 15:00:51.269: INFO: Created: latency-svc-vkmzt Jan 3 15:00:51.312: INFO: Got endpoints: latency-svc-646g4 [761.8937ms] Jan 3 15:00:51.337: INFO: Created: latency-svc-9tmbt Jan 3 15:00:51.351: INFO: Got endpoints: latency-svc-hscv6 [746.906335ms] Jan 3 15:00:51.364: INFO: Created: latency-svc-8jppg Jan 3 15:00:51.401: INFO: Got endpoints: latency-svc-nn7zv [750.111643ms] Jan 3 15:00:51.419: INFO: Created: latency-svc-6h9nw Jan 3 15:00:51.450: INFO: Got endpoints: latency-svc-rf9fx [749.722003ms] Jan 3 15:00:51.463: INFO: Created: latency-svc-h26tk Jan 3 15:00:51.500: INFO: Got endpoints: latency-svc-x68vl [749.199314ms] Jan 3 15:00:51.514: INFO: Created: latency-svc-dd5zb Jan 3 15:00:51.551: INFO: Got endpoints: latency-svc-q4d72 [751.758486ms] Jan 3 15:00:51.562: INFO: Created: latency-svc-nk4wt Jan 3 15:00:51.600: INFO: Got endpoints: latency-svc-87kzg [748.14722ms] Jan 3 15:00:51.614: INFO: Created: latency-svc-24859 Jan 3 15:00:51.653: INFO: Got endpoints: latency-svc-rfthq [752.922069ms] Jan 3 15:00:51.665: INFO: Created: latency-svc-ck866 Jan 3 15:00:51.700: INFO: Got endpoints: latency-svc-qmd6w [749.459501ms] Jan 3 15:00:51.743: INFO: Created: latency-svc-kh2cn Jan 3 15:00:51.755: INFO: Got endpoints: latency-svc-4d8p4 [754.398183ms] Jan 3 15:00:51.768: INFO: Created: latency-svc-czxhv Jan 3 15:00:51.803: INFO: Got endpoints: latency-svc-rp8xg [750.448344ms] Jan 3 15:00:51.817: INFO: Created: latency-svc-nlw52 Jan 3 15:00:51.850: INFO: Got endpoints: latency-svc-dl76h [750.261293ms] Jan 3 15:00:51.865: INFO: Created: latency-svc-prtbc Jan 3 15:00:51.900: INFO: Got endpoints: latency-svc-6765s [748.409753ms] Jan 3 15:00:51.910: INFO: Created: latency-svc-8cb8r Jan 3 15:00:51.950: INFO: Got endpoints: latency-svc-b79lk [746.611591ms] Jan 3 15:00:51.961: INFO: Created: latency-svc-tnx58 Jan 3 15:00:52.000: INFO: Got endpoints: latency-svc-vkmzt [746.167633ms] Jan 3 15:00:52.011: INFO: Created: latency-svc-94hlt Jan 3 15:00:52.058: INFO: Got endpoints: latency-svc-9tmbt [745.838822ms] Jan 3 15:00:52.069: INFO: 
Created: latency-svc-n45kc Jan 3 15:00:52.101: INFO: Got endpoints: latency-svc-8jppg [749.612473ms] Jan 3 15:00:52.113: INFO: Created: latency-svc-bcz9c Jan 3 15:00:52.149: INFO: Got endpoints: latency-svc-6h9nw [748.737791ms] Jan 3 15:00:52.169: INFO: Created: latency-svc-h28h7 Jan 3 15:00:52.202: INFO: Got endpoints: latency-svc-h26tk [751.683089ms] Jan 3 15:00:52.214: INFO: Created: latency-svc-vjlbg Jan 3 15:00:52.250: INFO: Got endpoints: latency-svc-dd5zb [750.154543ms] Jan 3 15:00:52.275: INFO: Created: latency-svc-dxs47 Jan 3 15:00:52.315: INFO: Got endpoints: latency-svc-nk4wt [763.435517ms] Jan 3 15:00:52.330: INFO: Created: latency-svc-tr42g Jan 3 15:00:52.350: INFO: Got endpoints: latency-svc-24859 [749.724209ms] Jan 3 15:00:52.364: INFO: Created: latency-svc-tdzfw Jan 3 15:00:52.401: INFO: Got endpoints: latency-svc-ck866 [747.279425ms] Jan 3 15:00:52.415: INFO: Created: latency-svc-99wdc Jan 3 15:00:52.449: INFO: Got endpoints: latency-svc-kh2cn [749.442519ms] Jan 3 15:00:52.461: INFO: Created: latency-svc-75cq7 Jan 3 15:00:52.500: INFO: Got endpoints: latency-svc-czxhv [745.182496ms] Jan 3 15:00:52.512: INFO: Created: latency-svc-z4dj6 Jan 3 15:00:52.550: INFO: Got endpoints: latency-svc-nlw52 [747.048938ms] Jan 3 15:00:52.562: INFO: Created: latency-svc-c97s7 Jan 3 15:00:52.602: INFO: Got endpoints: latency-svc-prtbc [751.685586ms] Jan 3 15:00:52.613: INFO: Created: latency-svc-nd8vp Jan 3 15:00:52.650: INFO: Got endpoints: latency-svc-8cb8r [749.999437ms] Jan 3 15:00:52.663: INFO: Created: latency-svc-2725f Jan 3 15:00:52.699: INFO: Got endpoints: latency-svc-tnx58 [749.223964ms] Jan 3 15:00:52.710: INFO: Created: latency-svc-lg9zc Jan 3 15:00:52.750: INFO: Got endpoints: latency-svc-94hlt [750.031953ms] Jan 3 15:00:52.761: INFO: Created: latency-svc-dfj75 Jan 3 15:00:52.805: INFO: Got endpoints: latency-svc-n45kc [746.934231ms] Jan 3 15:00:52.815: INFO: Created: latency-svc-hp9n2 Jan 3 15:00:52.852: INFO: Got endpoints: latency-svc-bcz9c [751.330216ms] Jan 3 15:00:52.863: INFO: Created: latency-svc-krbhl Jan 3 15:00:52.900: INFO: Got endpoints: latency-svc-h28h7 [750.485709ms] Jan 3 15:00:52.917: INFO: Created: latency-svc-7h2j7 Jan 3 15:00:52.951: INFO: Got endpoints: latency-svc-vjlbg [749.122283ms] Jan 3 15:00:52.963: INFO: Created: latency-svc-wcvk9 Jan 3 15:00:53.000: INFO: Got endpoints: latency-svc-dxs47 [750.326485ms] Jan 3 15:00:53.016: INFO: Created: latency-svc-qx6fg Jan 3 15:00:53.052: INFO: Got endpoints: latency-svc-tr42g [737.425917ms] Jan 3 15:00:53.068: INFO: Created: latency-svc-k2x56 Jan 3 15:00:53.100: INFO: Got endpoints: latency-svc-tdzfw [749.28343ms] Jan 3 15:00:53.112: INFO: Created: latency-svc-8fc66 Jan 3 15:00:53.151: INFO: Got endpoints: latency-svc-99wdc [750.792697ms] Jan 3 15:00:53.164: INFO: Created: latency-svc-gb9gw Jan 3 15:00:53.200: INFO: Got endpoints: latency-svc-75cq7 [750.694911ms] Jan 3 15:00:53.213: INFO: Created: latency-svc-v4jm8 Jan 3 15:00:53.250: INFO: Got endpoints: latency-svc-z4dj6 [749.425378ms] Jan 3 15:00:53.266: INFO: Created: latency-svc-fwplt Jan 3 15:00:53.312: INFO: Got endpoints: latency-svc-c97s7 [761.342034ms] Jan 3 15:00:53.340: INFO: Created: latency-svc-ltg5q Jan 3 15:00:53.355: INFO: Got endpoints: latency-svc-nd8vp [752.34344ms] Jan 3 15:00:53.375: INFO: Created: latency-svc-lkntc Jan 3 15:00:53.405: INFO: Got endpoints: latency-svc-2725f [754.608837ms] Jan 3 15:00:53.420: INFO: Created: latency-svc-h5kwm