Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 1h3m
Revision | release-1.1
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
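The `--ginkgo.focus` value is a regular expression matched against the full, space-joined Ginkgo spec name, which is why each space is written as `\s` and the hyphens and brackets are escaped. A small illustrative Go check (not part of the job itself) showing that the expression selects exactly this spec:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus expression copied from the job invocation above; in a Go raw
	// string the backslashes stay literal, as they do on the command line.
	focus := regexp.MustCompile(`capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$`)

	// The assembled spec name: container description plus It description.
	spec := "capi-e2e When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest"
	fmt.Println(focus.MatchString(spec)) // true
}
```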
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115
Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc001bb6948>: {
        error: <*errors.withMessage | 0xc0020a8580>{
            cause: <*errors.errorString | 0xc00166a8b0>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1a97f78, 0x1adc389, 0x7b9691, 0x7b9085, 0x7b875b, 0x7be4c9, 0x7bdeb2, 0x7def91, 0x7decb6, 0x7de305, 0x7e0745, 0x7ec929, 0x7ec73e, 0x1af7c92, 0x523bab, 0x46e1e1],
    }
Unable to run conformance tests: error container run failed with exit code 1
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:232
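The `<*errors.withStack | ...>` / `<*errors.withMessage | ...>` nesting in the dump is the wrapper chain produced by the github.com/pkg/errors package. A minimal sketch (not the test framework's actual code) of how an equivalent chain is assembled and printed:

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

func runConformance() error {
	// Innermost *errors.errorString: the kubetest container exited non-zero.
	cause := fmt.Errorf("error container run failed with exit code 1")
	// errors.Wrap adds a message (*errors.withMessage) and records the call
	// stack (*errors.withStack), matching the nesting seen in the dump.
	return errors.Wrap(cause, "Unable to run conformance tests")
}

func main() {
	err := runConformance()
	fmt.Println(err)               // Unable to run conformance tests: error container run failed with exit code 1
	fmt.Println(errors.Cause(err)) // error container run failed with exit code 1
}
```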
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-nit25p
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-nit25p"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-d8uk6o" using the "upgrades-cgroupfs" template (Kubernetes v1.18.20, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-d8uk6o --infrastructure (default) --kubernetes-version v1.18.20 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades-cgroupfs
INFO: Applying the cluster template yaml to the cluster
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
configmap/cni-k8s-upgrade-and-conformance-d8uk6o-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-d8uk6o-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-d8uk6o-mp-0-config created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-d8uk6o-mp-0-config-cgroupfs created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-d8uk6o created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-d8uk6o-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-d8uk6o-dmp-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-nit25p/k8s-upgrade-and-conformance-d8uk6o-8ck6x to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-nit25p/k8s-upgrade-and-conformance-d8uk6o-8ck6x to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Cluster topology
INFO: Patching the new Kubernetes version to Cluster topology
INFO: Waiting for control-plane machines to have the upgraded Kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.19.16
INFO: Waiting for kube-proxy to have the upgraded Kubernetes version
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-nit25p/k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk to be upgraded to v1.19.16
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.19.16
STEP: Upgrading the machinepool instances
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-nit25p/k8s-upgrade-and-conformance-d8uk6o-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-nit25p/k8s-upgrade-and-conformance-d8uk6o-mp-0 to be upgraded from v1.18.20 to v1.19.16
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.19.16
STEP: Waiting until nodes are ready
STEP: Running conformance tests
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e, command=["-nodes=4" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=4" "--kubeconfig=/tmp/kubeconfig" "-ginkgo.v=true" "-disable-log-dump=true" "-ginkgo.flakeAttempts=3" "-ginkgo.focus=\\[Conformance\\]" "-ginkgo.progress=true" "-ginkgo.skip=\\[Serial\\]" "-ginkgo.slowSpecThreshold=120" "-ginkgo.trace=true"]
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1675176252 - Will randomize all specs
Will run 5484 specs
Running in parallel across 4 nodes
Jan 31 14:44:14.399: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 31 14:44:14.402: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 31 14:44:14.418: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 31 14:44:14.480: INFO: The status of Pod coredns-f9fd979d6-6nvbj is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 31 14:44:14.480: INFO: The status of Pod kindnet-b9rkm is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 31 14:44:14.481: INFO: The status of Pod kindnet-tsmg6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 31 14:44:14.481: INFO: The status of Pod kube-proxy-5jhkx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 31 14:44:14.481: INFO: The status of Pod kube-proxy-kvwkw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 31 14:44:14.481: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 31 14:44:14.481: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 31 14:44:14.481: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 31 14:44:14.481: INFO: coredns-f9fd979d6-6nvbj k8s-upgrade-and-conformance-d8uk6o-worker-xjohq1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:41:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:43:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:42:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:41:57 +0000 UTC }]
Jan 31 14:44:14.481: INFO: kindnet-b9rkm k8s-upgrade-and-conformance-d8uk6o-worker-155fw5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:36:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:43:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:36:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:36:11 +0000 UTC }]
Jan 31 14:44:14.481: INFO: kindnet-tsmg6 k8s-upgrade-and-conformance-d8uk6o-worker-xjohq1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:36:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:43:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:36:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:35:56 +0000 UTC }]
Jan 31 14:44:14.481: INFO: kube-proxy-5jhkx k8s-upgrade-and-conformance-d8uk6o-worker-155fw5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:42:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:43:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:42:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:42:16 +0000 UTC }]
Jan 31 14:44:14.481: INFO: kube-proxy-kvwkw k8s-upgrade-and-conformance-d8uk6o-worker-xjohq1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:41:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:43:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:41:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:41:34 +0000 UTC }]
Jan 31 14:44:14.481: INFO:
Jan 31 14:44:16.501: INFO: The status of Pod coredns-f9fd979d6-6nvbj is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 31 14:44:16.501: INFO: The status of Pod kindnet-b9rkm is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 31 14:44:16.501: INFO: The status of Pod kindnet-tsmg6 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 31 14:44:16.501: INFO: The status of Pod kube-proxy-5jhkx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 31 14:44:16.501: INFO: The status of Pod kube-proxy-kvwkw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 31 14:44:16.501: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Jan 31 14:44:16.501: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
[... the same POD/NODE/PHASE/CONDITIONS table and per-pod status lines repeated, unchanged apart from the poll timestamps, at 2-second intervals from 14:44:16 through 14:44:48 (34 seconds elapsed) ...]
Jan 31 14:44:50.498: INFO: The status of Pod coredns-f9fd979d6-lbxdt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 31 14:44:50.498: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (36 seconds elapsed)
Jan 31 14:44:50.498: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 31 14:44:50.498: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 14:44:50.498: INFO: coredns-f9fd979d6-lbxdt k8s-upgrade-and-conformance-d8uk6o-worker-6xi31i Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:44:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:44:49 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:44:49 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:44:49 +0000 UTC }] Jan 31 14:44:50.498: INFO: Jan 31 14:44:52.498: INFO: The status of Pod coredns-f9fd979d6-lbxdt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 31 14:44:52.499: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (38 seconds elapsed) Jan 31 14:44:52.499: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. Jan 31 14:44:52.499: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 14:44:52.499: INFO: coredns-f9fd979d6-lbxdt k8s-upgrade-and-conformance-d8uk6o-worker-6xi31i Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:44:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:44:49 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:44:49 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:44:49 +0000 UTC }] Jan 31 14:44:52.499: INFO: Jan 31 14:44:54.501: INFO: The status of Pod coredns-f9fd979d6-lbxdt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 31 14:44:54.501: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (40 seconds elapsed) Jan 31 14:44:54.501: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. Jan 31 14:44:54.501: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 14:44:54.501: INFO: coredns-f9fd979d6-lbxdt k8s-upgrade-and-conformance-d8uk6o-worker-6xi31i Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:44:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:44:49 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:44:49 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:44:49 +0000 UTC }] Jan 31 14:44:54.501: INFO: Jan 31 14:44:56.498: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (42 seconds elapsed) Jan 31 14:44:56.498: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
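The two-second poll above is the e2e framework waiting for every kube-system pod to report Running with Ready=true before the conformance specs start. A minimal client-go sketch of the same wait, assuming an initialized clientset; the interval and timeout values here are illustrative, not the framework's exact ones:

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitKubeSystemReady polls every 2s, as the log above does, until all
    // kube-system pods are Running with Ready=true, or one of them fails.
    func waitKubeSystemReady(ctx context.Context, cs kubernetes.Interface) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
            if err != nil {
                return false, nil // transient API error: keep polling
            }
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodFailed {
                    return false, fmt.Errorf("pod %s failed", p.Name)
                }
                ready := false
                for _, c := range p.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        ready = true
                    }
                }
                if !ready {
                    return false, nil // the "Running (Ready = false)" case above
                }
            }
            return true, nil
        })
    }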
Jan 31 14:44:56.498: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 31 14:44:56.508: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jan 31 14:44:56.508: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 31 14:44:56.508: INFO: e2e test version: v1.19.16
Jan 31 14:44:56.509: INFO: kube-apiserver version: v1.19.16
Jan 31 14:44:56.510: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 31 14:44:56.515: INFO: Cluster IP family: ipv4
------------------------------
Jan 31 14:44:56.535: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 31 14:44:56.551: INFO: Cluster IP family: ipv4
Jan 31 14:44:56.535: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 31 14:44:56.552: INFO: Cluster IP family: ipv4
------------------------------
Jan 31 14:44:56.535: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 31 14:44:56.556: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:44:56.529: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
Jan 31 14:44:56.572: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 31 14:44:56.587: INFO: Waiting up to 5m0s for pod "pod-6a90fee5-3255-482b-b7e2-6c62cace2534" in namespace "emptydir-3670" to be "Succeeded or Failed"
Jan 31 14:44:56.590: INFO: Pod "pod-6a90fee5-3255-482b-b7e2-6c62cace2534": Phase="Pending", Reason="", readiness=false. Elapsed: 3.114171ms
Jan 31 14:44:58.596: INFO: Pod "pod-6a90fee5-3255-482b-b7e2-6c62cace2534": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009868759s
Jan 31 14:45:00.778: INFO: Pod "pod-6a90fee5-3255-482b-b7e2-6c62cace2534": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191300035s
Jan 31 14:45:02.782: INFO: Pod "pod-6a90fee5-3255-482b-b7e2-6c62cace2534": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.195299651s
STEP: Saw pod success
Jan 31 14:45:02.782: INFO: Pod "pod-6a90fee5-3255-482b-b7e2-6c62cace2534" satisfied condition "Succeeded or Failed"
Jan 31 14:45:02.785: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-6a90fee5-3255-482b-b7e2-6c62cace2534 container test-container: <nil>
STEP: delete the pod
Jan 31 14:45:02.813: INFO: Waiting for pod pod-6a90fee5-3255-482b-b7e2-6c62cace2534 to disappear
Jan 31 14:45:02.817: INFO: Pod pod-6a90fee5-3255-482b-b7e2-6c62cace2534 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:45:02.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3670" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
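The spec above creates a short-lived pod that writes a 0644-mode file into a tmpfs-backed emptyDir and exits. A rough sketch of such a pod spec, assuming a hypothetical busybox image and shell command rather than the agnhost helper the conformance suite actually uses:

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // tmpfsPod returns a pod with an in-memory emptyDir; the container writes
    // a file with mode 0644 (umask 133) and exits so the pod reaches Succeeded.
    func tmpfsPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-tmpfs-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "cache",
                    VolumeSource: corev1.VolumeSource{
                        // Medium=Memory is what makes the emptyDir tmpfs-backed.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "test-container",
                    Image:        "busybox", // illustrative; the suite uses its own helper image
                    Command:      []string{"sh", "-c", "umask 133 && touch /cache/f && stat -c %a /cache/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
                }},
            },
        }
    }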
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:45:02.831: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should create services for rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
Jan 31 14:45:02.918: INFO: namespace kubectl-2818
Jan 31 14:45:02.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2818 create -f -'
Jan 31 14:45:03.507: INFO: stderr: ""
Jan 31 14:45:03.507: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jan 31 14:45:04.513: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 14:45:04.513: INFO: Found 0 / 1
Jan 31 14:45:05.512: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 14:45:05.512: INFO: Found 1 / 1
Jan 31 14:45:05.512: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 31 14:45:05.515: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 14:45:05.515: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 31 14:45:05.515: INFO: wait on agnhost-primary startup in kubectl-2818
Jan 31 14:45:05.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2818 logs agnhost-primary-tqchv agnhost-primary'
Jan 31 14:45:05.655: INFO: stderr: ""
Jan 31 14:45:05.655: INFO: stdout: "Paused\n"
STEP: exposing RC
Jan 31 14:45:05.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2818 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379'
Jan 31 14:45:05.822: INFO: stderr: ""
Jan 31 14:45:05.822: INFO: stdout: "service/rm2 exposed\n"
Jan 31 14:45:05.827: INFO: Service rm2 in namespace kubectl-2818 found.
STEP: exposing service
Jan 31 14:45:07.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2818 expose service rm2 --name=rm3 --port=2345 --target-port=6379'
Jan 31 14:45:07.986: INFO: stderr: ""
Jan 31 14:45:07.986: INFO: stdout: "service/rm3 exposed\n"
Jan 31 14:45:07.993: INFO: Service rm3 in namespace kubectl-2818 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:45:09.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2818" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
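As the Running '...' records show, the framework drives this spec by shelling out to the kubectl binary. A sketch of the same expose chain via os/exec; the helper names are invented, while the kubeconfig, namespace, and flags are taken from the log:

    import (
        "fmt"
        "os/exec"
    )

    // runKubectl mirrors the framework's pattern of invoking kubectl with an
    // explicit kubeconfig and namespace.
    func runKubectl(args ...string) (string, error) {
        base := []string{"--kubeconfig=/tmp/kubeconfig", "--namespace=kubectl-2818"}
        out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
        return string(out), err
    }

    // exposeChain reproduces the two steps: rc -> service rm2 -> service rm3.
    func exposeChain() error {
        if _, err := runKubectl("expose", "rc", "agnhost-primary",
            "--name=rm2", "--port=1234", "--target-port=6379"); err != nil {
            return fmt.Errorf("expose rc: %w", err)
        }
        _, err := runKubectl("expose", "service", "rm2",
            "--name=rm3", "--port=2345", "--target-port=6379")
        return err
    }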
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:44:56.563: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
Jan 31 14:44:56.600: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
Jan 31 14:44:56.604: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: mark a version not serverd
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:45:14.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2771" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
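Serving is a per-version switch on the CRD: only versions with served=true are published into the cluster's OpenAPI document, which is what this spec verifies. A sketch of a two-version CRD where flipping Served to false removes that version's definition; the group and kind are invented for illustration:

    import (
        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // multiVersionCRD returns a CRD with one served and one unserved version.
    func multiVersionCRD() *apiextensionsv1.CustomResourceDefinition {
        schema := &apiextensionsv1.CustomResourceValidation{
            OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
        }
        return &apiextensionsv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "widgets.demo.example.com"},
            Spec: apiextensionsv1.CustomResourceDefinitionSpec{
                Group: "demo.example.com",
                Scope: apiextensionsv1.NamespaceScoped,
                Names: apiextensionsv1.CustomResourceDefinitionNames{
                    Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
                },
                Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
                    {Name: "v1", Served: true, Storage: true, Schema: schema},
                    // Served:false is the "mark a version not served" step:
                    // this version's definition disappears from the spec.
                    {Name: "v2", Served: false, Storage: false, Schema: schema},
                },
            },
        }
    }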
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:45:14.166: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 14:45:14.592: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 31 14:45:16.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773114, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773114, loc:(*time.Location)(0x771eac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773114, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773114, loc:(*time.Location)(0x771eac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 14:45:18.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773114, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773114, loc:(*time.Location)(0x771eac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773114, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773114, loc:(*time.Location)(0x771eac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 14:45:21.618: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a mutating webhook configuration
Jan 31 14:45:31.637: INFO: Waiting for webhook configuration to be ready...
Jan 31 14:45:41.747: INFO: Waiting for webhook configuration to be ready...
Jan 31 14:45:51.861: INFO: Waiting for webhook configuration to be ready...
Jan 31 14:46:01.951: INFO: Waiting for webhook configuration to be ready...
Jan 31 14:46:11.969: INFO: Waiting for webhook configuration to be ready...
Jan 31 14:46:11.970: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0001f6200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.glob..func22.17()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:527 +0x407
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002db9e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002db9e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002db9e00, 0x4df04f8)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:46:11.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-215" for this suite.
STEP: Destroying namespace "webhook-215-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• Failure [57.947 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  Jan 31 14:46:11.970: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0001f6200>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:527
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":1,"skipped":6,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
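This is the single failure of the run: after registering a mutating webhook configuration, the test repeatedly waits for the webhook to actually mutate a marker object and gives up after five ten-second rounds. For reference, a sketch of what registering such a configuration looks like with the admissionregistration/v1 API; the names, service path, and caPEM bundle are placeholders, not the test's exact values:

    import (
        admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // mutatingWebhookConfig sketches the object the test registers. caPEM is
    // an assumed PEM bundle for the webhook server's serving certificate.
    func mutatingWebhookConfig(caPEM []byte) *admissionregistrationv1.MutatingWebhookConfiguration {
        sideEffects := admissionregistrationv1.SideEffectClassNone
        failurePolicy := admissionregistrationv1.Fail
        path := "/mutate" // illustrative service path
        return &admissionregistrationv1.MutatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "demo-mutating-webhook"},
            Webhooks: []admissionregistrationv1.MutatingWebhook{{
                Name: "demo.example.com",
                ClientConfig: admissionregistrationv1.WebhookClientConfig{
                    Service: &admissionregistrationv1.ServiceReference{
                        Namespace: "webhook-215", Name: "e2e-test-webhook", Path: &path,
                    },
                    CABundle: caPEM,
                },
                Rules: []admissionregistrationv1.RuleWithOperations{{
                    Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
                    Rule: admissionregistrationv1.Rule{
                        APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"configmaps"},
                    },
                }},
                SideEffects:             &sideEffects,
                FailurePolicy:           &failurePolicy,
                AdmissionReviewVersions: []string{"v1", "v1beta1"},
            }},
        }
    }

Note that the immediate retry below passes with an identical setup, which is consistent with a transient readiness or endpoint-propagation delay rather than a systematically broken configuration.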
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:46:12.118: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 14:46:12.903: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 14:46:15.936: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:46:16.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8984" for this suite.
STEP: Destroying namespace "webhook-8984-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":2,"skipped":6,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:46:16.345: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of pod templates
Jan 31 14:46:16.434: INFO: created test-podtemplate-1
Jan 31 14:46:16.444: INFO: created test-podtemplate-2
Jan 31 14:46:16.453: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Jan 31 14:46:16.460: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Jan 31 14:46:16.493: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:46:16.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-5606" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":3,"skipped":15,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
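"requesting DeleteCollection of pod templates" refers to the API's bulk delete: every object matching a selector is removed in one request. A client-go sketch of that sequence, using an illustrative label selector:

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteTemplateCollection deletes all pod templates matching a label in
    // one call, then lists with the same selector to confirm none remain.
    func deleteTemplateCollection(ctx context.Context, cs kubernetes.Interface, ns string) (int, error) {
        sel := metav1.ListOptions{LabelSelector: "podtemplate=test"} // illustrative label
        if err := cs.CoreV1().PodTemplates(ns).DeleteCollection(ctx, metav1.DeleteOptions{}, sel); err != nil {
            return 0, err
        }
        remaining, err := cs.CoreV1().PodTemplates(ns).List(ctx, sel)
        if err != nil {
            return 0, err
        }
        return len(remaining.Items), nil
    }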
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:46:16.613: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 14:46:16.681: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-d2ee195e-1087-4e5b-9f87-4c1b215e50cf" in namespace "security-context-test-8373" to be "Succeeded or Failed"
Jan 31 14:46:16.686: INFO: Pod "alpine-nnp-false-d2ee195e-1087-4e5b-9f87-4c1b215e50cf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.538089ms
Jan 31 14:46:18.693: INFO: Pod "alpine-nnp-false-d2ee195e-1087-4e5b-9f87-4c1b215e50cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01225846s
Jan 31 14:46:20.700: INFO: Pod "alpine-nnp-false-d2ee195e-1087-4e5b-9f87-4c1b215e50cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01890994s
Jan 31 14:46:20.700: INFO: Pod "alpine-nnp-false-d2ee195e-1087-4e5b-9f87-4c1b215e50cf" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:46:20.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8373" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":46,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
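Setting allowPrivilegeEscalation=false turns on the kernel's no_new_privs flag for the container process, which is the property the alpine-based pod checks before exiting. A sketch of a pod spec with that field set; the image and command are illustrative:

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // nnpFalsePod disallows privilege escalation; inside the container this
    // surfaces as NoNewPrivs:1 in /proc/self/status.
    func nnpFalsePod() *corev1.Pod {
        allowPE := false
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "demo",
                    Image:   "alpine", // illustrative
                    Command: []string{"sh", "-c", "grep NoNewPrivs /proc/self/status"},
                    SecurityContext: &corev1.SecurityContext{
                        AllowPrivilegeEscalation: &allowPE,
                    },
                }},
            },
        }
    }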
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:46:20.798: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-e9b1955d-3864-4cad-9786-192b63ded28a
STEP: Creating a pod to test consume configMaps
Jan 31 14:46:20.847: INFO: Waiting up to 5m0s for pod "pod-configmaps-90f7b8df-ed79-49c3-9521-f84d8d782e5c" in namespace "configmap-7961" to be "Succeeded or Failed"
Jan 31 14:46:20.850: INFO: Pod "pod-configmaps-90f7b8df-ed79-49c3-9521-f84d8d782e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.974272ms
Jan 31 14:46:22.855: INFO: Pod "pod-configmaps-90f7b8df-ed79-49c3-9521-f84d8d782e5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00816355s
STEP: Saw pod success
Jan 31 14:46:22.855: INFO: Pod "pod-configmaps-90f7b8df-ed79-49c3-9521-f84d8d782e5c" satisfied condition "Succeeded or Failed"
Jan 31 14:46:22.859: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-configmaps-90f7b8df-ed79-49c3-9521-f84d8d782e5c container configmap-volume-test: <nil>
STEP: delete the pod
Jan 31 14:46:22.890: INFO: Waiting for pod pod-configmaps-90f7b8df-ed79-49c3-9521-f84d8d782e5c to disappear
Jan 31 14:46:22.893: INFO: Pod pod-configmaps-90f7b8df-ed79-49c3-9521-f84d8d782e5c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:46:22.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7961" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":88,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
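"With mappings" means the volume projects selected keys to chosen paths via items, instead of materializing every key as a file named after itself. A sketch of such a volume source; the key and path are illustrative:

    import corev1 "k8s.io/api/core/v1"

    // mappedConfigMapVolume projects only the listed keys, each to an explicit
    // relative path inside the mount.
    func mappedConfigMapVolume() corev1.Volume {
        return corev1.Volume{
            Name: "configmap-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{
                        Name: "configmap-test-volume-map", // illustrative name
                    },
                    Items: []corev1.KeyToPath{
                        {Key: "data-1", Path: "path/to/data-2"}, // key -> chosen path
                    },
                },
            },
        }
    }

The "as non-root" variant later in this log is essentially the same volume consumed by a pod running under a non-root UID.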
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:46:22.931: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:46:36.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3986" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":6,"skipped":95,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
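The spec creates a quota, admits one pod that fits, then shows that an over-quota pod is rejected at admission time and that an existing pod cannot grow its requests in place. A sketch of a quota with hard limits of the kind exercised here; all values are illustrative:

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podQuota caps pod count and aggregate requests; a pod is admitted only
    // if its requests fit into what the quota still has free.
    func podQuota() *corev1.ResourceQuota {
        return &corev1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
            Spec: corev1.ResourceQuotaSpec{
                Hard: corev1.ResourceList{
                    corev1.ResourcePods:           resource.MustParse("2"),
                    corev1.ResourceRequestsCPU:    resource.MustParse("500m"),
                    corev1.ResourceRequestsMemory: resource.MustParse("256Mi"),
                },
            },
        }
    }

The replication controller variant further down exercises the same accounting for object counts rather than pod resources.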
[Conformance]","total":-1,"completed":6,"skipped":95,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 31 14:46:36.100: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-map-2a270e7d-393c-4496-8606-66913d4f7bbc �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 31 14:46:36.166: INFO: Waiting up to 5m0s for pod "pod-configmaps-10734a0e-11c0-4686-bc60-c3839159c44d" in namespace "configmap-9301" to be "Succeeded or Failed" Jan 31 14:46:36.171: INFO: Pod "pod-configmaps-10734a0e-11c0-4686-bc60-c3839159c44d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.925489ms Jan 31 14:46:38.176: INFO: Pod "pod-configmaps-10734a0e-11c0-4686-bc60-c3839159c44d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010563485s �[1mSTEP�[0m: Saw pod success Jan 31 14:46:38.176: INFO: Pod "pod-configmaps-10734a0e-11c0-4686-bc60-c3839159c44d" satisfied condition "Succeeded or Failed" Jan 31 14:46:38.184: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-configmaps-10734a0e-11c0-4686-bc60-c3839159c44d container configmap-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 31 14:46:38.214: INFO: Waiting for pod pod-configmaps-10734a0e-11c0-4686-bc60-c3839159c44d to disappear Jan 31 14:46:38.223: INFO: Pod pod-configmaps-10734a0e-11c0-4686-bc60-c3839159c44d no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 14:46:38.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-9301" for this suite. 
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:46:38.514: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:46:49.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7701" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":8,"skipped":191,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
[Conformance]","total":-1,"completed":8,"skipped":191,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 31 14:46:49.666: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename svcaccounts �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 �[1mSTEP�[0m: getting the auto-created API token Jan 31 14:46:50.233: INFO: created pod pod-service-account-defaultsa Jan 31 14:46:50.233: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 31 14:46:50.239: INFO: created pod pod-service-account-mountsa Jan 31 14:46:50.239: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 31 14:46:50.257: INFO: created pod pod-service-account-nomountsa Jan 31 14:46:50.257: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 31 14:46:50.273: INFO: created pod pod-service-account-defaultsa-mountspec Jan 31 14:46:50.273: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 31 14:46:50.291: INFO: created pod pod-service-account-mountsa-mountspec Jan 31 14:46:50.291: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 31 14:46:50.306: INFO: created pod pod-service-account-nomountsa-mountspec Jan 31 14:46:50.306: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 31 14:46:50.312: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 31 14:46:50.313: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 31 14:46:50.331: INFO: created pod pod-service-account-mountsa-nomountspec Jan 31 14:46:50.331: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 31 14:46:50.354: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 31 14:46:50.354: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 14:46:50.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svcaccounts-7050" for this suite. 
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:46:50.403: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-4713
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 14:46:50.492: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 31 14:46:50.627: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 14:46:52.635: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 14:46:54.632: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 14:46:56.637: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 14:46:58.632: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 14:47:00.633: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 14:47:02.633: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 14:47:04.635: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 14:47:06.632: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 14:47:08.633: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 14:47:10.633: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 14:47:12.632: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 14:47:14.640: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 14:47:16.632: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 31 14:47:16.641: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jan 31 14:47:16.648: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jan 31 14:47:16.657: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jan 31 14:47:18.691: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.14:8080/dial?request=hostname&protocol=udp&host=192.168.0.7&port=8081&tries=1'] Namespace:pod-network-test-4713 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:47:18.691: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 31 14:47:18.831: INFO: Waiting for responses: map[]
Jan 31 14:47:18.837: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.14:8080/dial?request=hostname&protocol=udp&host=192.168.1.5&port=8081&tries=1'] Namespace:pod-network-test-4713 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:47:18.837: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 31 14:47:18.957: INFO: Waiting for responses: map[]
Jan 31 14:47:18.961: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.14:8080/dial?request=hostname&protocol=udp&host=192.168.2.6&port=8081&tries=1'] Namespace:pod-network-test-4713 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:47:18.961: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 31 14:47:19.083: INFO: Waiting for responses: map[]
Jan 31 14:47:19.088: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.14:8080/dial?request=hostname&protocol=udp&host=192.168.6.13&port=8081&tries=1'] Namespace:pod-network-test-4713 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:47:19.088: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 31 14:47:19.213: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:47:19.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4713" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":212,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
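Each probe execs curl inside the test-container pod against agnhost's /dial endpoint, which performs the actual UDP hop to a netserver pod and reports the hostnames that answered (an empty "Waiting for responses: map[]" means nothing is outstanding). A sketch of the same request in Go; it assumes the caller can reach the pod IP directly, which is exactly why the e2e test runs curl from inside the cluster instead:

    import (
        "fmt"
        "io"
        "net/http"
    )

    // dialUDP asks the agnhost webserver on the test pod to UDP-dial a target
    // netserver and report which hostname answered (URL pattern from the log).
    func dialUDP(testPodIP, targetIP string) (string, error) {
        url := fmt.Sprintf(
            "http://%s:8080/dial?request=hostname&protocol=udp&host=%s&port=8081&tries=1",
            testPodIP, targetIP)
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        return string(body), err
    }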
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:47:19.303: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-d6514054-7ebf-4224-878a-d67d5efcf397
STEP: Creating a pod to test consume secrets
Jan 31 14:47:19.368: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-39e330c5-b090-4665-a6cc-cf7be0d3e5b2" in namespace "projected-2973" to be "Succeeded or Failed"
Jan 31 14:47:19.372: INFO: Pod "pod-projected-secrets-39e330c5-b090-4665-a6cc-cf7be0d3e5b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.614992ms
Jan 31 14:47:21.379: INFO: Pod "pod-projected-secrets-39e330c5-b090-4665-a6cc-cf7be0d3e5b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010858658s
STEP: Saw pod success
Jan 31 14:47:21.379: INFO: Pod "pod-projected-secrets-39e330c5-b090-4665-a6cc-cf7be0d3e5b2" satisfied condition "Succeeded or Failed"
Jan 31 14:47:21.383: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-projected-secrets-39e330c5-b090-4665-a6cc-cf7be0d3e5b2 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 31 14:47:21.407: INFO: Waiting for pod pod-projected-secrets-39e330c5-b090-4665-a6cc-cf7be0d3e5b2 to disappear
Jan 31 14:47:21.413: INFO: Pod pod-projected-secrets-39e330c5-b090-4665-a6cc-cf7be0d3e5b2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:47:21.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2973" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":244,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:44:56.641: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
Jan 31 14:44:56.709: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
Jan 31 14:46:57.259: INFO: Successfully updated pod "var-expansion-94fce0db-dcea-4a4a-be0c-09a25d677b8f"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Jan 31 14:46:59.269: INFO: Deleting pod "var-expansion-94fce0db-dcea-4a4a-be0c-09a25d677b8f" in namespace "var-expansion-6605"
Jan 31 14:46:59.283: INFO: Wait up to 5m0s for pod "var-expansion-94fce0db-dcea-4a4a-be0c-09a25d677b8f" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:47:35.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6605" for this suite.
• [SLOW TEST:158.667 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":-1,"completed":1,"skipped":45,"failed":0}
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:47:35.513: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 14:47:35.575: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 31 14:47:40.594: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 31 14:47:40.594: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 31 14:47:42.601: INFO: Creating deployment "test-rollover-deployment"
Jan 31 14:47:42.613: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 31 14:47:44.625: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 31 14:47:44.635: INFO: Ensure that both replica sets have 1 created replica
Jan 31 14:47:44.645: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 31 14:47:44.656: INFO: Updating deployment test-rollover-deployment
Jan 31 14:47:44.656: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 31 14:47:46.666: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 31 14:47:46.678: INFO: Make sure deployment "test-rollover-deployment" is complete
pod-template-hash label Jan 31 14:47:46.690: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773262, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773262, loc:(*time.Location)(0x771eac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773265, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773262, loc:(*time.Location)(0x771eac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 14:47:48.702: INFO: all replica sets need to contain the pod-template-hash label Jan 31 14:47:48.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773262, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773262, loc:(*time.Location)(0x771eac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773265, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773262, loc:(*time.Location)(0x771eac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 14:47:50.702: INFO: all replica sets need to contain the pod-template-hash label Jan 31 14:47:50.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773262, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773262, loc:(*time.Location)(0x771eac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773265, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773262, loc:(*time.Location)(0x771eac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 14:47:52.700: INFO: all replica sets need to contain the pod-template-hash label Jan 31 14:47:52.701: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773262, loc:(*time.Location)(0x771eac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773262, loc:(*time.Location)(0x771eac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773265, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773262, loc:(*time.Location)(0x771eac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 14:47:54.703: INFO: all replica sets need to contain the pod-template-hash label Jan 31 14:47:54.703: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773262, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773262, loc:(*time.Location)(0x771eac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773265, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810773262, loc:(*time.Location)(0x771eac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 14:47:56.703: INFO: Jan 31 14:47:56.704: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Jan 31 14:47:56.722: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-2263 /apis/apps/v1/namespaces/deployment-2263/deployments/test-rollover-deployment ff041136-170a-4103-95b8-645c08dc5856 4355 2 2023-01-31 14:47:42 +0000 UTC <nil> <nil> map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-31 14:47:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-31 14:47:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b5f668 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-31 14:47:42 +0000 UTC,LastTransitionTime:2023-01-31 14:47:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2023-01-31 14:47:55 +0000 UTC,LastTransitionTime:2023-01-31 14:47:42 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 31 14:47:56.729: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-2263 /apis/apps/v1/namespaces/deployment-2263/replicasets/test-rollover-deployment-5797c7764 551d03f3-0dd7-4312-a3d9-394e2506eb7e 4344 2 2023-01-31 14:47:44 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment ff041136-170a-4103-95b8-645c08dc5856 0xc002b5fb60 0xc002b5fb61}] [] [{kube-controller-manager Update apps/v1 2023-01-31 14:47:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff041136-170a-4103-95b8-645c08dc5856\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b5fbd8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 31 14:47:56.729: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 31 14:47:56.729: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2263 /apis/apps/v1/namespaces/deployment-2263/replicasets/test-rollover-controller 7047a3b2-8736-4236-bc31-e9190ddf57cb 4354 2 2023-01-31 14:47:35 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment ff041136-170a-4103-95b8-645c08dc5856 0xc002b5fa5f 0xc002b5fa70}] [] [{e2e.test Update apps/v1 2023-01-31 14:47:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-31 
14:47:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff041136-170a-4103-95b8-645c08dc5856\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002b5fb08 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 31 14:47:56.729: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-2263 /apis/apps/v1/namespaces/deployment-2263/replicasets/test-rollover-deployment-78bc8b888c 504e6cd7-7393-4907-9721-d219c44d2606 4299 2 2023-01-31 14:47:42 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment ff041136-170a-4103-95b8-645c08dc5856 0xc002b5fc37 0xc002b5fc38}] [] [{kube-controller-manager Update apps/v1 2023-01-31 14:47:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff041136-170a-4103-95b8-645c08dc5856\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b5fcc8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 31 14:47:56.736: INFO: Pod "test-rollover-deployment-5797c7764-54p7h" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-54p7h test-rollover-deployment-5797c7764- deployment-2263 /api/v1/namespaces/deployment-2263/pods/test-rollover-deployment-5797c7764-54p7h 7002865e-48dc-43b4-8526-09458a093071 4313 0 2023-01-31 14:47:44 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 551d03f3-0dd7-4312-a3d9-394e2506eb7e 0xc002ee8430 0xc002ee8431}] [] [{kube-controller-manager Update v1 2023-01-31 14:47:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"551d03f3-0dd7-4312-a3d9-394e2506eb7e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-31 14:47:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.19\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6w2w5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6w2w5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6w2w5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-d8uk6o-worker-z043bi,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-31 14:47:44 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-31 14:47:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-31 14:47:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-31 14:47:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.6.19,StartTime:2023-01-31 14:47:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-31 14:47:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://b74669411bc5d5c11011fdaa92bdf2225bf547a3b2a7e66d1101673cab284752,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:47:56.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2263" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":2,"skipped":132,"failed":0}
SSS
------------------------------
[BeforeEach] [k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:47:56.778: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:48:00.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9361" for this suite.
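(Aside, not part of the suite output: the /etc/hosts spec above exercises the pod-level hostAliases field, which the kubelet appends to the container's hosts file. A minimal sketch with hypothetical names, assuming kubectl points at any test cluster:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo        # hypothetical name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/hosts"]
EOF
kubectl logs hostaliases-demo   # the kubelet-managed section should list foo.local and bar.local
)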
•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":135,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:48:00.933: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Jan 31 14:48:03.544: INFO: Successfully updated pod "labelsupdate7bea2701-695b-4f3b-ad4c-1f523a7d3367"
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:48:07.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6594" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":144,"failed":0}
[BeforeEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:48:07.593: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-28lg
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 14:48:07.658: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-28lg" in namespace "subpath-6475" to be "Succeeded or Failed"
Jan 31 14:48:07.663: INFO: Pod "pod-subpath-test-downwardapi-28lg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.249782ms
Jan 31 14:48:09.668: INFO: Pod "pod-subpath-test-downwardapi-28lg": Phase="Running", Reason="", readiness=true. Elapsed: 2.00997039s
Jan 31 14:48:11.675: INFO: Pod "pod-subpath-test-downwardapi-28lg": Phase="Running", Reason="", readiness=true. Elapsed: 4.017213316s
Jan 31 14:48:13.685: INFO: Pod "pod-subpath-test-downwardapi-28lg": Phase="Running", Reason="", readiness=true. Elapsed: 6.026649903s
Jan 31 14:48:15.712: INFO: Pod "pod-subpath-test-downwardapi-28lg": Phase="Running", Reason="", readiness=true. Elapsed: 8.053657825s
Jan 31 14:48:17.725: INFO: Pod "pod-subpath-test-downwardapi-28lg": Phase="Running", Reason="", readiness=true. Elapsed: 10.066575762s
Jan 31 14:48:19.730: INFO: Pod "pod-subpath-test-downwardapi-28lg": Phase="Running", Reason="", readiness=true. Elapsed: 12.071782113s
Jan 31 14:48:21.737: INFO: Pod "pod-subpath-test-downwardapi-28lg": Phase="Running", Reason="", readiness=true. Elapsed: 14.078298868s
Jan 31 14:48:23.742: INFO: Pod "pod-subpath-test-downwardapi-28lg": Phase="Running", Reason="", readiness=true. Elapsed: 16.083884036s
Jan 31 14:48:25.748: INFO: Pod "pod-subpath-test-downwardapi-28lg": Phase="Running", Reason="", readiness=true. Elapsed: 18.089388017s
Jan 31 14:48:27.753: INFO: Pod "pod-subpath-test-downwardapi-28lg": Phase="Running", Reason="", readiness=true. Elapsed: 20.09506222s
Jan 31 14:48:29.760: INFO: Pod "pod-subpath-test-downwardapi-28lg": Phase="Running", Reason="", readiness=true. Elapsed: 22.101909516s
Jan 31 14:48:31.766: INFO: Pod "pod-subpath-test-downwardapi-28lg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.108158121s
STEP: Saw pod success
Jan 31 14:48:31.767: INFO: Pod "pod-subpath-test-downwardapi-28lg" satisfied condition "Succeeded or Failed"
Jan 31 14:48:31.771: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-subpath-test-downwardapi-28lg container test-container-subpath-downwardapi-28lg: <nil>
STEP: delete the pod
Jan 31 14:48:31.797: INFO: Waiting for pod pod-subpath-test-downwardapi-28lg to disappear
Jan 31 14:48:31.801: INFO: Pod pod-subpath-test-downwardapi-28lg no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-28lg
Jan 31 14:48:31.801: INFO: Deleting pod "pod-subpath-test-downwardapi-28lg" in namespace "subpath-6475"
[AfterEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:48:31.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6475" for this suite.
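(Aside, not part of the suite output: the atomic-writer subpath arrangement above amounts to mounting a single file out of a downwardAPI volume via subPath. A rough sketch with hypothetical names; note that a file mounted through subPath is written once and, unlike a whole-volume downwardAPI mount, does not receive live updates:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-subpath-demo   # hypothetical name
  labels:
    purpose: demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/labels
      subPath: labels              # mount one file from the volume
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
)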
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":144,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:48:31.849: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 14:48:32.232: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 14:48:35.274: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:48:47.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7207" for this suite.
STEP: Destroying namespace "webhook-7207-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":6,"skipped":156,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:48:47.563: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-e34fbd3d-c439-49f7-ac39-225b759c1d92
STEP: Creating a pod to test consume configMaps
Jan 31 14:48:47.648: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7fd9f2db-5ae8-4255-b033-1354e1ce4152" in namespace "projected-6305" to be "Succeeded or Failed"
Jan 31 14:48:47.653: INFO: Pod "pod-projected-configmaps-7fd9f2db-5ae8-4255-b033-1354e1ce4152": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350158ms
Jan 31 14:48:49.659: INFO: Pod "pod-projected-configmaps-7fd9f2db-5ae8-4255-b033-1354e1ce4152": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010221532s
STEP: Saw pod success
Jan 31 14:48:49.659: INFO: Pod "pod-projected-configmaps-7fd9f2db-5ae8-4255-b033-1354e1ce4152" satisfied condition "Succeeded or Failed"
Jan 31 14:48:49.663: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-projected-configmaps-7fd9f2db-5ae8-4255-b033-1354e1ce4152 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jan 31 14:48:49.692: INFO: Waiting for pod pod-projected-configmaps-7fd9f2db-5ae8-4255-b033-1354e1ce4152 to disappear
Jan 31 14:48:49.696: INFO: Pod pod-projected-configmaps-7fd9f2db-5ae8-4255-b033-1354e1ce4152 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:48:49.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6305" for this suite.
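(Aside, not part of the suite output: the projected-configMap spec above consumes a ConfigMap through a projected volume with a key-to-path mapping, read by a non-root container. A minimal sketch with hypothetical names:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo        # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root, hypothetical UID
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: data-1
            path: path/to/data-1  # the "mapping" the spec name refers to
EOF
)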
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":159,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:48:49.738: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support CSR API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
STEP: getting /apis/certificates.k8s.io
STEP: getting /apis/certificates.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jan 31 14:48:50.594: INFO: starting watch
STEP: patching
STEP: updating
Jan 31 14:48:50.613: INFO: waiting for watch events with expected annotations
Jan 31 14:48:50.613: INFO: saw patched and updated annotations
STEP: getting /approval
STEP: patching /approval
STEP: updating /approval
STEP: getting /status
STEP: patching /status
STEP: updating /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:48:50.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-5187" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":8,"skipped":166,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:48:50.725: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jan 31 14:48:50.824: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6acbfffd-1ca7-497c-8b82-0f257caa58df" in namespace "downward-api-9471" to be "Succeeded or Failed"
Jan 31 14:48:50.828: INFO: Pod "downwardapi-volume-6acbfffd-1ca7-497c-8b82-0f257caa58df": Phase="Pending", Reason="", readiness=false. Elapsed: 3.701313ms
Jan 31 14:48:52.833: INFO: Pod "downwardapi-volume-6acbfffd-1ca7-497c-8b82-0f257caa58df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009263529s
STEP: Saw pod success
Jan 31 14:48:52.833: INFO: Pod "downwardapi-volume-6acbfffd-1ca7-497c-8b82-0f257caa58df" satisfied condition "Succeeded or Failed"
Jan 31 14:48:52.838: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod downwardapi-volume-6acbfffd-1ca7-497c-8b82-0f257caa58df container client-container: <nil>
STEP: delete the pod
Jan 31 14:48:52.863: INFO: Waiting for pod downwardapi-volume-6acbfffd-1ca7-497c-8b82-0f257caa58df to disappear
Jan 31 14:48:52.869: INFO: Pod downwardapi-volume-6acbfffd-1ca7-497c-8b82-0f257caa58df no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:48:52.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9471" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":168,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:48:52.969: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 31 14:48:53.010: INFO: Waiting up to 5m0s for pod "pod-60e1e563-76db-4daa-a2fc-538a08657a97" in namespace "emptydir-7693" to be "Succeeded or Failed"
Jan 31 14:48:53.014: INFO: Pod "pod-60e1e563-76db-4daa-a2fc-538a08657a97": Phase="Pending", Reason="", readiness=false. Elapsed: 3.992198ms
Jan 31 14:48:55.019: INFO: Pod "pod-60e1e563-76db-4daa-a2fc-538a08657a97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00838968s
STEP: Saw pod success
Jan 31 14:48:55.019: INFO: Pod "pod-60e1e563-76db-4daa-a2fc-538a08657a97" satisfied condition "Succeeded or Failed"
Jan 31 14:48:55.023: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h pod pod-60e1e563-76db-4daa-a2fc-538a08657a97 container test-container: <nil>
STEP: delete the pod
Jan 31 14:48:55.061: INFO: Waiting for pod pod-60e1e563-76db-4daa-a2fc-538a08657a97 to disappear
Jan 31 14:48:55.066: INFO: Pod pod-60e1e563-76db-4daa-a2fc-538a08657a97 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:48:55.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7693" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":200,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:48:55.085: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-1eee3838-3249-42d4-b4a7-433a0e9503d2
STEP: Creating a pod to test consume configMaps
Jan 31 14:48:55.146: INFO: Waiting up to 5m0s for pod "pod-configmaps-653f5e8d-6a6d-48f1-8e19-ef746c7662d2" in namespace "configmap-5365" to be "Succeeded or Failed"
Jan 31 14:48:55.149: INFO: Pod "pod-configmaps-653f5e8d-6a6d-48f1-8e19-ef746c7662d2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.447874ms
Jan 31 14:48:57.158: INFO: Pod "pod-configmaps-653f5e8d-6a6d-48f1-8e19-ef746c7662d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01174863s
STEP: Saw pod success
Jan 31 14:48:57.158: INFO: Pod "pod-configmaps-653f5e8d-6a6d-48f1-8e19-ef746c7662d2" satisfied condition "Succeeded or Failed"
Jan 31 14:48:57.163: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h pod pod-configmaps-653f5e8d-6a6d-48f1-8e19-ef746c7662d2 container configmap-volume-test: <nil>
STEP: delete the pod
Jan 31 14:48:57.187: INFO: Waiting for pod pod-configmaps-653f5e8d-6a6d-48f1-8e19-ef746c7662d2 to disappear
Jan 31 14:48:57.192: INFO: Pod pod-configmaps-653f5e8d-6a6d-48f1-8e19-ef746c7662d2 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:48:57.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5365" for this suite.
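(Aside, not part of the suite output: the ConfigMap spec above mounts the same ConfigMap through two separate volumes in one pod. A minimal sketch with hypothetical names:

kubectl create configmap shared-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-volumes-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:
  - name: cm-one
    configMap:
      name: shared-config
  - name: cm-two
    configMap:
      name: shared-config            # same ConfigMap, consumed twice
EOF
)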
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":203,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:48:57.288: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Kubectl replace
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1546
[It] should update a single-container pod's image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 31 14:48:57.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1826 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod'
Jan 31 14:48:57.533: INFO: stderr: ""
Jan 31 14:48:57.533: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jan 31 14:49:02.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1826 get pod e2e-test-httpd-pod -o json'
Jan 31 14:49:02.761: INFO: stderr: ""
Jan 31 14:49:02.761: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2023-01-31T14:48:57Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2023-01-31T14:48:57Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n
\"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"192.168.0.11\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2023-01-31T14:48:58Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1826\",\n \"resourceVersion\": \"4840\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1826/pods/e2e-test-httpd-pod\",\n \"uid\": \"b85c3549-3b2f-472b-af93-b3be77931b1f\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-69hq5\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-69hq5\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-69hq5\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-31T14:48:57Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-31T14:48:58Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-31T14:48:58Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-31T14:48:57Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://1fcb5010cc1a1fd7c08bb1f2587a60dc37df335c5f30a875de0d6524e991bea0\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-01-31T14:48:58Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"192.168.0.11\",\n \"podIPs\": [\n {\n \"ip\": \"192.168.0.11\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-01-31T14:48:57Z\"\n }\n}\n" �[1mSTEP�[0m: replace the image in the pod Jan 31 14:49:02.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1826 replace -f -' Jan 
Jan 31 14:49:03.266: INFO: stderr: ""
Jan 31 14:49:03.266: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550
Jan 31 14:49:03.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1826 delete pods e2e-test-httpd-pod'
Jan 31 14:49:04.904: INFO: stderr: ""
Jan 31 14:49:04.904: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:49:04.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1826" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":12,"skipped":229,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:49:04.956: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-1300
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-1300
I0131 14:49:05.088807 14 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1300, replica count: 2
I0131 14:49:08.140033 14 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 31 14:49:08.140: INFO: Creating new exec pod
Jan 31 14:49:11.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1300 exec execpodkzsb9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 31 14:49:11.490: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jan 31 14:49:11.491: INFO: stdout: ""
Jan 31 14:49:11.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1300 exec execpodkzsb9 -- /bin/sh -x -c nc -zv -t -w 2 10.133.175.84 80'
Jan 31 14:49:11.806: INFO: stderr: "+ nc -zv -t -w 2 10.133.175.84 80\nConnection to 10.133.175.84 80 port [tcp/http] succeeded!\n"
Jan 31 14:49:11.806: INFO: stdout: ""
Jan 31 14:49:11.806: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:49:11.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1300" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":13,"skipped":241,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:49:12.204: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-3c315ba0-2bb9-4926-8fb2-025f113d1595
STEP: Creating secret with name s-test-opt-upd-0e70c367-f18c-4ea9-9d54-b9748f51e745
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3c315ba0-2bb9-4926-8fb2-025f113d1595
STEP: Updating secret s-test-opt-upd-0e70c367-f18c-4ea9-9d54-b9748f51e745
STEP: Creating secret with name s-test-opt-create-336127f7-1074-48de-9a88-21345b4e1d40
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:49:16.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2299" for this suite.
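(Aside, not part of the suite output: the "optional updates" spec above relies on secret volumes marked optional, which let a pod start before the secret exists and pick up later create/update/delete events via the kubelet sync loop. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo     # hypothetical name
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do ls /etc/maybe-secret; sleep 5; done"]
    volumeMounts:
    - name: maybe-secret
      mountPath: /etc/maybe-secret
  volumes:
  - name: maybe-secret
    secret:
      secretName: demo-optional-secret
      optional: true             # pod starts even if the secret is absent
EOF
# Creating the secret afterwards should eventually surface in the volume:
kubectl create secret generic demo-optional-secret --from-literal=data-1=value-1
)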
[BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:49:16.461: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Jan 31 14:49:16.513: INFO: Waiting up to 5m0s for pod "var-expansion-44b848ec-9e92-4b6f-a4f6-4abc4e06d9e9" in namespace "var-expansion-9862" to be "Succeeded or Failed"
Jan 31 14:49:16.518: INFO: Pod "var-expansion-44b848ec-9e92-4b6f-a4f6-4abc4e06d9e9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.645433ms
Jan 31 14:49:18.528: INFO: Pod "var-expansion-44b848ec-9e92-4b6f-a4f6-4abc4e06d9e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013738253s
Jan 31 14:49:20.534: INFO: Pod "var-expansion-44b848ec-9e92-4b6f-a4f6-4abc4e06d9e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020279025s
STEP: Saw pod success
Jan 31 14:49:20.535: INFO: Pod "var-expansion-44b848ec-9e92-4b6f-a4f6-4abc4e06d9e9" satisfied condition "Succeeded or Failed"
Jan 31 14:49:20.539: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-nqv5p pod var-expansion-44b848ec-9e92-4b6f-a4f6-4abc4e06d9e9 container dapi-container: <nil>
STEP: delete the pod
Jan 31 14:49:20.578: INFO: Waiting for pod var-expansion-44b848ec-9e92-4b6f-a4f6-4abc4e06d9e9 to disappear
Jan 31 14:49:20.584: INFO: Pod var-expansion-44b848ec-9e92-4b6f-a4f6-4abc4e06d9e9 no longer exists
[AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:49:20.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9862" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":348,"failed":0}
------------------------------
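The pod under test relies on $(VAR) references in args, which the kubelet expands from the container's env before the command runs. A minimal equivalent, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    env:
    - name: MESSAGE
      value: "test-message"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]   # $(MESSAGE) is substituted by Kubernetes, not by the shell
EOF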
[BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:49:20.638: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-f71c265e-1fd5-40fe-a893-734790e927c1
STEP: Creating a pod to test consume configMaps
Jan 31 14:49:20.727: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-79d160a5-831b-427b-941a-f88ba8df45e8" in namespace "projected-6554" to be "Succeeded or Failed"
Jan 31 14:49:20.732: INFO: Pod "pod-projected-configmaps-79d160a5-831b-427b-941a-f88ba8df45e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116388ms
Jan 31 14:49:22.738: INFO: Pod "pod-projected-configmaps-79d160a5-831b-427b-941a-f88ba8df45e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010507923s
STEP: Saw pod success
Jan 31 14:49:22.738: INFO: Pod "pod-projected-configmaps-79d160a5-831b-427b-941a-f88ba8df45e8" satisfied condition "Succeeded or Failed"
Jan 31 14:49:22.742: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-projected-configmaps-79d160a5-831b-427b-941a-f88ba8df45e8 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jan 31 14:49:22.766: INFO: Waiting for pod pod-projected-configmaps-79d160a5-831b-427b-941a-f88ba8df45e8 to disappear
Jan 31 14:49:22.772: INFO: Pod pod-projected-configmaps-79d160a5-831b-427b-941a-f88ba8df45e8 no longer exists
[AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:49:22.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6554" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":365,"failed":0}
------------------------------
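Here the pod consumes a ConfigMap through a projected volume while running under a non-root UID. A sketch of the shape involved (names and the UID are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-pod
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # the "as non-root" part of the test name
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
EOF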
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:49:22.852: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 14:49:23.708: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 14:49:26.737: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 14:49:26.742: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6025-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:49:27.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-43" for this suite.
STEP: Destroying namespace "webhook-43-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":17,"skipped":394,"failed":0}
------------------------------
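The registration step above creates an object along these lines; this is a sketch only: the service name, namespace, and path are illustrative, and the caBundle (the CA that signed the webhook's serving certificate) is omitted here although real use requires it:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-custom-resource-with-pruning
webhooks:
- name: mutate-custom-resource.example.com
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  clientConfig:
    service:
      name: e2e-test-webhook        # the Deployment/Service stood up above
      namespace: webhook-43         # illustrative; the suite generates this namespace
      path: /mutating-custom-resource
    # caBundle: <base64 PEM bundle> # omitted in this sketch
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-6025-crds"]
EOF

The "with pruning" part of the test name refers to the target CRD using a structural schema: any fields the webhook injects that the schema does not declare are pruned from the stored object, and the test asserts the mutation survives that pruning.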
[BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:45:10.051: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-1454
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating stateful set ss in namespace statefulset-1454
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1454
Jan 31 14:45:10.110: INFO: Found 0 stateful pods, waiting for 1
Jan 31 14:45:20.115: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 31 14:45:20.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 31 14:45:20.287: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 31 14:45:20.287: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 31 14:45:20.287: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 31 14:45:20.291: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 31 14:45:30.295: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 14:45:30.295: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 14:45:30.308: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 31 14:45:30.308: INFO: ss-0 k8s-upgrade-and-conformance-d8uk6o-worker-z043bi Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:10 +0000 UTC }]
Jan 31 14:45:30.308: INFO:
Jan 31 14:45:30.308: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 31 14:45:31.313: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996367404s
Jan 31 14:45:32.317: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991856941s
Jan 31 14:45:33.321: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987712333s
Jan 31 14:45:34.325: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.983253565s
Jan 31 14:45:35.330: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.979223286s
Jan 31 14:45:36.334: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.974315007s
Jan 31 14:45:37.338: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.970605843s
Jan 31 14:45:38.342: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.966868522s
Jan 31 14:45:39.346: INFO: Verifying statefulset ss doesn't scale past 3 for another 962.237192ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1454
Jan 31 14:45:40.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 14:45:40.525: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 31 14:45:40.525: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 14:45:40.525: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 31 14:45:40.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 14:45:40.691: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jan 31 14:45:40.691: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 14:45:40.691: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 31 14:45:40.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 14:45:40.874: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jan 31 14:45:40.874: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 31 14:45:40.874: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 31 14:45:40.878: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jan 31 14:45:50.884: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 14:45:50.884: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 14:45:50.884: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 31 14:45:50.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-0 --
/bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 14:45:51.208: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 31 14:45:51.208: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 14:45:51.208: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 14:45:51.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 14:45:51.590: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 31 14:45:51.590: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 14:45:51.590: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 14:45:51.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 14:45:51.979: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 31 14:45:51.979: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 14:45:51.979: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 14:45:51.979: INFO: Waiting for statefulset status.replicas updated to 0 Jan 31 14:45:51.985: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 31 14:46:01.998: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 31 14:46:01.998: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 31 14:46:01.998: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 31 14:46:02.015: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 14:46:02.015: INFO: ss-0 k8s-upgrade-and-conformance-d8uk6o-worker-z043bi Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:10 +0000 UTC }] Jan 31 14:46:02.015: INFO: ss-1 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:02.015: INFO: ss-2 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-nqv5p Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:02.015: INFO: Jan 31 14:46:02.015: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 31 14:46:03.024: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 14:46:03.025: INFO: ss-0 k8s-upgrade-and-conformance-d8uk6o-worker-z043bi Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:10 +0000 UTC }] Jan 31 14:46:03.025: INFO: ss-1 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:03.025: INFO: ss-2 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-nqv5p Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:03.025: INFO: Jan 31 14:46:03.025: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 31 14:46:04.032: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 14:46:04.032: INFO: ss-0 k8s-upgrade-and-conformance-d8uk6o-worker-z043bi Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:10 +0000 UTC }] Jan 31 14:46:04.032: INFO: ss-1 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:04.032: INFO: ss-2 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-nqv5p Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:04.032: INFO: Jan 31 14:46:04.032: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 31 14:46:05.038: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 14:46:05.038: INFO: ss-1 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:05.038: INFO: ss-2 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-nqv5p Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:05.038: INFO: Jan 31 14:46:05.038: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 31 14:46:06.045: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 14:46:06.046: INFO: ss-1 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:06.046: INFO: ss-2 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-nqv5p Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:06.046: INFO: Jan 31 14:46:06.046: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 31 14:46:07.053: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 14:46:07.053: INFO: ss-1 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:07.053: INFO: ss-2 
k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-nqv5p Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:07.053: INFO: Jan 31 14:46:07.053: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 31 14:46:08.061: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 14:46:08.061: INFO: ss-1 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:08.061: INFO: ss-2 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-nqv5p Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:08.061: INFO: Jan 31 14:46:08.061: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 31 14:46:09.068: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 14:46:09.068: INFO: ss-1 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:09.068: INFO: ss-2 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-nqv5p Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }] Jan 31 14:46:09.068: INFO: Jan 31 14:46:09.068: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 31 14:46:10.076: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 14:46:10.076: INFO: ss-1 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady 
False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }]
Jan 31 14:46:10.076: INFO: ss-2 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-nqv5p Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }]
Jan 31 14:46:10.076: INFO:
Jan 31 14:46:10.076: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 31 14:46:11.082: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 31 14:46:11.083: INFO: ss-1 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }]
Jan 31 14:46:11.083: INFO: ss-2 k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-nqv5p Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 14:45:30 +0000 UTC }]
Jan 31 14:46:11.083: INFO:
Jan 31 14:46:11.083: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-1454
Jan 31 14:46:12.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 14:46:12.375: INFO: rc: 1
Jan 31 14:46:12.375: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1
Jan 31 14:46:22.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 14:46:22.561: INFO: rc: 1
Jan 31 14:46:22.561: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
Jan 31 14:46:32.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454
exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:46:32.747: INFO: rc: 1 Jan 31 14:46:32.748: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:46:42.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:46:42.913: INFO: rc: 1 Jan 31 14:46:42.914: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:46:52.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:46:53.123: INFO: rc: 1 Jan 31 14:46:53.123: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:47:03.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:47:03.297: INFO: rc: 1 Jan 31 14:47:03.297: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:47:13.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:47:13.475: INFO: rc: 1 Jan 31 14:47:13.475: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:47:23.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:47:23.734: INFO: rc: 1 Jan 31 14:47:23.735: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:47:33.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || 
true' Jan 31 14:47:33.997: INFO: rc: 1 Jan 31 14:47:33.997: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:47:43.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:47:44.196: INFO: rc: 1 Jan 31 14:47:44.196: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:47:54.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:47:54.429: INFO: rc: 1 Jan 31 14:47:54.429: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:48:04.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:48:04.634: INFO: rc: 1 Jan 31 14:48:04.634: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:48:14.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:48:14.896: INFO: rc: 1 Jan 31 14:48:14.896: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:48:24.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:48:25.101: INFO: rc: 1 Jan 31 14:48:25.101: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:48:35.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:48:35.281: INFO: rc: 1 Jan 31 14:48:35.281: INFO: Waiting 10s to 
retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:48:45.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:48:45.466: INFO: rc: 1 Jan 31 14:48:45.467: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:48:55.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:48:55.660: INFO: rc: 1 Jan 31 14:48:55.660: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:49:05.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:49:05.856: INFO: rc: 1 Jan 31 14:49:05.856: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:49:15.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:49:16.045: INFO: rc: 1 Jan 31 14:49:16.045: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:49:26.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:49:26.238: INFO: rc: 1 Jan 31 14:49:26.238: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:49:36.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:49:36.432: INFO: rc: 1 Jan 31 14:49:36.432: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:49:46.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:49:46.657: INFO: rc: 1 Jan 31 14:49:46.657: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:49:56.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:49:56.861: INFO: rc: 1 Jan 31 14:49:56.861: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:50:06.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:50:07.050: INFO: rc: 1 Jan 31 14:50:07.050: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:50:17.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:50:17.260: INFO: rc: 1 Jan 31 14:50:17.260: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:50:27.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:50:27.453: INFO: rc: 1 Jan 31 14:50:27.453: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jan 31 14:50:37.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:50:37.640: INFO: rc: 1 Jan 31 14:50:37.640: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh 
-x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
Jan 31 14:50:47.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 14:50:47.813: INFO: rc: 1
Jan 31 14:50:47.813: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
Jan 31 14:50:57.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 14:50:58.001: INFO: rc: 1
Jan 31 14:50:58.002: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
Jan 31 14:51:08.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 14:51:08.193: INFO: rc: 1
Jan 31 14:51:08.193: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
Jan 31 14:51:18.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1454 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 31 14:51:18.388: INFO: rc: 1
Jan 31 14:51:18.388: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1:
Jan 31 14:51:18.388: INFO: Scaling statefulset ss to 0
Jan 31 14:51:18.423: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Jan 31 14:51:18.430: INFO: Deleting all statefulset in ns statefulset-1454
Jan 31 14:51:18.436: INFO: Scaling statefulset ss to 0
Jan 31 14:51:18.454: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 14:51:18.459: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:51:18.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1454" for this suite.
• [SLOW TEST:368.456 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":3,"skipped":41,"failed":0}
------------------------------
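Burst scaling is the podManagementPolicy: Parallel mode: scale-up and scale-down proceed without waiting for neighbouring pods to become Running and Ready, which the test forces by moving each pod's index.html aside so the readiness probe fails. A minimal sketch of such a StatefulSet, mirroring the httpd image and service name seen above (field values illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  podManagementPolicy: Parallel      # "burst" scaling; the default is OrderedReady
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine
        readinessProbe:
          httpGet:
            path: /index.html        # the test removes this file to force unreadiness
            port: 80
EOF

kubectl scale statefulset ss --replicas=3   # proceeds even while pods are unready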
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:51:18.623: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 14:51:18.661: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 31 14:51:23.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2805 --namespace=crd-publish-openapi-2805 create -f -'
Jan 31 14:51:24.043: INFO: stderr: ""
Jan 31 14:51:24.043: INFO: stdout: "e2e-test-crd-publish-openapi-3221-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 31 14:51:24.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2805 --namespace=crd-publish-openapi-2805 delete e2e-test-crd-publish-openapi-3221-crds test-cr'
Jan 31 14:51:24.295: INFO: stderr: ""
Jan 31 14:51:24.295: INFO: stdout: "e2e-test-crd-publish-openapi-3221-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jan 31 14:51:24.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2805 --namespace=crd-publish-openapi-2805 apply -f -'
Jan 31 14:51:25.302: INFO: stderr: ""
Jan 31 14:51:25.303: INFO: stdout: "e2e-test-crd-publish-openapi-3221-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 31 14:51:25.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2805 --namespace=crd-publish-openapi-2805 delete e2e-test-crd-publish-openapi-3221-crds test-cr'
Jan 31 14:51:25.496: INFO: stderr: ""
Jan 31 14:51:25.496: INFO: stdout: "e2e-test-crd-publish-openapi-3221-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 31 14:51:25.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2805 explain e2e-test-crd-publish-openapi-3221-crds'
Jan 31 14:51:26.007: INFO: stderr: ""
Jan 31 14:51:26.007: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3221-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:51:30.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2805" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":4,"skipped":93,"failed":0}
------------------------------
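The CRD published for this case sets x-kubernetes-preserve-unknown-fields at the root of its schema, which is why client-side validation accepts arbitrary properties and why kubectl explain can only report an <empty> description. A sketch with an illustrative group and kind:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # no pruning at the schema root
EOF

# Any unknown property is now accepted and stored as-is:
kubectl apply -f - <<'EOF'
apiVersion: example.com/v1
kind: Widget
metadata:
  name: test-cr
unknownField: kept-as-is
EOF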
[BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:51:30.062: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 31 14:51:34.618: INFO: Successfully updated pod "pod-update-activedeadlineseconds-da8b87b3-118f-4422-8aee-b61396cdf177"
Jan 31 14:51:34.618: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-da8b87b3-118f-4422-8aee-b61396cdf177" in namespace "pods-8445" to be "terminated due to deadline exceeded"
Jan 31 14:51:34.622: INFO: Pod "pod-update-activedeadlineseconds-da8b87b3-118f-4422-8aee-b61396cdf177": Phase="Running", Reason="", readiness=true. Elapsed: 3.78638ms
Jan 31 14:51:36.625: INFO: Pod "pod-update-activedeadlineseconds-da8b87b3-118f-4422-8aee-b61396cdf177": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.007644368s
Jan 31 14:51:36.625: INFO: Pod "pod-update-activedeadlineseconds-da8b87b3-118f-4422-8aee-b61396cdf177" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:51:36.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8445" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":109,"failed":0}
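activeDeadlineSeconds is one of the few pod-spec fields that is mutable on a running pod (it may be added or shortened, never extended), and once the deadline elapses the pod fails with reason DeadlineExceeded, which is exactly the Running-to-Failed transition logged above. A sketch with an illustrative pod name:

# Shorten a running pod's deadline to ~5 seconds from its start time.
kubectl patch pod pod-update-demo -p '{"spec":{"activeDeadlineSeconds":5}}'

# After the deadline passes:
kubectl get pod pod-update-demo -o jsonpath='{.status.phase}/{.status.reason}'
# -> Failed/DeadlineExceeded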
•
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":109,"failed":0}
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:51:36.635: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-8d5a25fb-bc85-437e-bb31-ad45a84eaf51
STEP: Creating a pod to test consume secrets
Jan 31 14:51:36.696: INFO: Waiting up to 5m0s for pod "pod-secrets-7e1276b7-1440-4249-907b-32d1a6deecb9" in namespace "secrets-9777" to be "Succeeded or Failed"
Jan 31 14:51:36.699: INFO: Pod "pod-secrets-7e1276b7-1440-4249-907b-32d1a6deecb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.473395ms
Jan 31 14:51:38.703: INFO: Pod "pod-secrets-7e1276b7-1440-4249-907b-32d1a6deecb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006538825s
STEP: Saw pod success
Jan 31 14:51:38.703: INFO: Pod "pod-secrets-7e1276b7-1440-4249-907b-32d1a6deecb9" satisfied condition "Succeeded or Failed"
Jan 31 14:51:38.706: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h pod pod-secrets-7e1276b7-1440-4249-907b-32d1a6deecb9 container secret-volume-test: <nil>
STEP: delete the pod
Jan 31 14:51:38.727: INFO: Waiting for pod pod-secrets-7e1276b7-1440-4249-907b-32d1a6deecb9 to disappear
Jan 31 14:51:38.731: INFO: Pod pod-secrets-7e1276b7-1440-4249-907b-32d1a6deecb9 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:51:38.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9777" for this suite.
STEP: Destroying namespace "secret-namespace-9908" for this suite.
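The point of this Secrets spec is namespace isolation of same-named objects: a secret volume resolves its secretName only within the pod's own namespace. A rough reproduction, with all names illustrative, might look like:

kubectl create namespace ns-a && kubectl create namespace ns-b
kubectl -n ns-a create secret generic shared-name --from-literal=data=from-a
kubectl -n ns-b create secret generic shared-name --from-literal=data=from-b
# a pod in ns-a that mounts "shared-name" must only ever see ns-a's payload
kubectl -n ns-a apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mount-check
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["cat", "/etc/secret/data"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret
  volumes:
  - name: secret-vol
    secret:
      secretName: shared-name
EOF
kubectl -n ns-a logs secret-mount-check   # expect "from-a", never "from-b"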
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":109,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:51:38.766: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1512
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 31 14:51:38.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-134 run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine'
Jan 31 14:51:38.906: INFO: stderr: ""
Jan 31 14:51:38.906: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
Jan 31 14:51:38.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-134 delete pods e2e-test-httpd-pod'
Jan 31 14:51:44.599: INFO: stderr: ""
Jan 31 14:51:44.599: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:51:44.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-134" for this suite.
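Worth noting for the spec above: with --restart=Never, kubectl run creates a bare Pod whose spec.restartPolicy is Never rather than a workload controller, which is why the test verifies and then deletes a single pod. A sketch reusing the image from this run (the pod name is illustrative):

kubectl run demo-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine
kubectl get pod demo-httpd-pod -o jsonpath='{.spec.restartPolicy}'   # prints: Never
kubectl delete pod demo-httpd-pod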
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":7,"skipped":127,"failed":0}
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:51:44.609: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:51:48.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4696" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":127,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:51:48.675: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jan 31 14:51:48.710: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82769c51-0c47-4f86-ab45-14670a92a50a" in namespace "projected-3185" to be "Succeeded or Failed"
Jan 31 14:51:48.713: INFO: Pod "downwardapi-volume-82769c51-0c47-4f86-ab45-14670a92a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.898626ms
Jan 31 14:51:50.718: INFO: Pod "downwardapi-volume-82769c51-0c47-4f86-ab45-14670a92a50a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007652312s
STEP: Saw pod success
Jan 31 14:51:50.718: INFO: Pod "downwardapi-volume-82769c51-0c47-4f86-ab45-14670a92a50a" satisfied condition "Succeeded or Failed"
Jan 31 14:51:50.721: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h pod downwardapi-volume-82769c51-0c47-4f86-ab45-14670a92a50a container client-container: <nil>
STEP: delete the pod
Jan 31 14:51:50.736: INFO: Waiting for pod downwardapi-volume-82769c51-0c47-4f86-ab45-14670a92a50a to disappear
Jan 31 14:51:50.739: INFO: Pod downwardapi-volume-82769c51-0c47-4f86-ab45-14670a92a50a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:51:50.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3185" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":131,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:51:50.771: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-4a9f0e03-e263-44fb-be60-079883d04406
STEP: Creating a pod to test consume configMaps
Jan 31 14:51:50.806: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-07f6b30d-8fc0-472a-9f1f-758185546e35" in namespace "projected-3485" to be "Succeeded or Failed"
Jan 31 14:51:50.809: INFO: Pod "pod-projected-configmaps-07f6b30d-8fc0-472a-9f1f-758185546e35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.282136ms
Jan 31 14:51:52.812: INFO: Pod "pod-projected-configmaps-07f6b30d-8fc0-472a-9f1f-758185546e35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005860836s
STEP: Saw pod success
Jan 31 14:51:52.812: INFO: Pod "pod-projected-configmaps-07f6b30d-8fc0-472a-9f1f-758185546e35" satisfied condition "Succeeded or Failed"
Jan 31 14:51:52.815: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h pod pod-projected-configmaps-07f6b30d-8fc0-472a-9f1f-758185546e35 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jan 31 14:51:52.831: INFO: Waiting for pod pod-projected-configmaps-07f6b30d-8fc0-472a-9f1f-758185546e35 to disappear
Jan 31 14:51:52.833: INFO: Pod pod-projected-configmaps-07f6b30d-8fc0-472a-9f1f-758185546e35 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:51:52.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3485" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":150,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:51:52.857: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 14:51:53.326: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 14:51:56.347: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:51:56.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1629" for this suite.
STEP: Destroying namespace "webhook-1629-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":11,"skipped":163,"failed":0}
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:51:56.576: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 31 14:52:00.646: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 14:52:00.649: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 14:52:02.649: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 14:52:02.653: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 14:52:04.649: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 14:52:04.653: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 14:52:06.649: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 14:52:06.653: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 14:52:08.649: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 14:52:08.653: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 14:52:10.649: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 14:52:10.654: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 14:52:12.649: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 14:52:12.653: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 14:52:14.649: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 14:52:14.653: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:52:14.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4707" for this suite.
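The preStop spec above hinges on hook ordering: on deletion, the kubelet runs the container's preStop handler (here an HTTP GET against the helper pod created in BeforeEach) before sending SIGTERM, which is why the pod lingers as "still exists" for several poll cycles. A minimal pod carrying such a hook, with name, image, and endpoint all illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: web
    image: nginx
    lifecycle:
      preStop:
        httpGet:
          path: /shutdown
          port: 8080
EOF
# deleting the pod triggers the GET before the container receives SIGTERM
kubectl delete pod prestop-demo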
•
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":167,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:52:14.704: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if v1 is in available api versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating api versions
Jan 31 14:52:14.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8540 api-versions'
Jan 31 14:52:14.850: INFO: stderr: ""
Jan 31 14:52:14.850: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:52:14.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8540" for this suite.
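The api-versions check above is a one-liner to replicate by hand: the test shells out to kubectl and asserts that the core group's "v1" appears in the served list. For example:

kubectl --kubeconfig=/tmp/kubeconfig api-versions | grep -x 'v1' && echo "core v1 is served"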
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":13,"skipped":186,"failed":0}
------------------------------
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:49:28.057: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod test-webserver-97aa5f44-7e6e-41a9-b735-00aa630f825b in namespace container-probe-3052
Jan 31 14:49:30.131: INFO: Started pod test-webserver-97aa5f44-7e6e-41a9-b735-00aa630f825b in namespace container-probe-3052
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 14:49:30.136: INFO: Initial restart count of pod test-webserver-97aa5f44-7e6e-41a9-b735-00aa630f825b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:53:30.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3052" for this suite.
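This probe spec passes by absence of restarts: the pod keeps answering its health endpoint for the whole soak, so restartCount stays at 0, which is why the run takes roughly four minutes. A pod with an equivalent liveness probe (image and endpoint are illustrative; the suite uses its own test-webserver image) might be:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3
EOF
# as long as the probe keeps succeeding, this stays at 0
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'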
• [SLOW TEST:242.708 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":413,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:47:21.448: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299
[It] should scale a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
Jan 31 14:47:21.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3525 create -f -'
Jan 31 14:47:22.000: INFO: stderr: ""
Jan 31 14:47:22.000: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 14:47:22.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3525 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 31 14:47:22.214: INFO: stderr: ""
Jan 31 14:47:22.214: INFO: stdout: "update-demo-nautilus-4ljcs update-demo-nautilus-rh7zc "
Jan 31 14:47:22.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3525 get pods update-demo-nautilus-4ljcs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 31 14:47:22.440: INFO: stderr: ""
Jan 31 14:47:22.440: INFO: stdout: ""
Jan 31 14:47:22.440: INFO: update-demo-nautilus-4ljcs is created but not running
Jan 31 14:47:27.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3525 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 31 14:47:27.618: INFO: stderr: ""
Jan 31 14:47:27.618: INFO: stdout: "update-demo-nautilus-4ljcs update-demo-nautilus-rh7zc "
Jan 31 14:47:27.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3525 get pods update-demo-nautilus-4ljcs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 31 14:47:27.812: INFO: stderr: ""
Jan 31 14:47:27.812: INFO: stdout: "true"
Jan 31 14:47:27.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3525 get pods update-demo-nautilus-4ljcs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 31 14:47:27.995: INFO: stderr: ""
Jan 31 14:47:27.995: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 14:47:27.995: INFO: validating pod update-demo-nautilus-4ljcs
Jan 31 14:51:01.032: INFO: update-demo-nautilus-4ljcs is running right image but validator function failed: an error on the server ("unknown") has prevented the request from succeeding (get pods update-demo-nautilus-4ljcs)
Jan 31 14:51:06.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3525 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 31 14:51:06.216: INFO: stderr: ""
Jan 31 14:51:06.216: INFO: stdout: "update-demo-nautilus-4ljcs update-demo-nautilus-rh7zc "
Jan 31 14:51:06.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3525 get pods update-demo-nautilus-4ljcs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 31 14:51:06.385: INFO: stderr: ""
Jan 31 14:51:06.385: INFO: stdout: "true"
Jan 31 14:51:06.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3525 get pods update-demo-nautilus-4ljcs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 31 14:51:06.583: INFO: stderr: ""
Jan 31 14:51:06.583: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 14:51:06.583: INFO: validating pod update-demo-nautilus-4ljcs
Jan 31 14:54:40.168: INFO: update-demo-nautilus-4ljcs is running right image but validator function failed: an error on the server ("unknown") has prevented the request from succeeding (get pods update-demo-nautilus-4ljcs)
Jan 31 14:54:45.168: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.validateController(0x5416760, 0xc000e38f20, 0xc0001359e0, 0x2e, 0x2, 0x4c07034, 0xb, 0x4c1b958, 0x10, 0xc0027458f0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2205 +0xd56
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 +0x2ad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002db9e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002db9e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002db9e00, 0x4df04f8)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
STEP: using delete to clean up resources
Jan 31 14:54:45.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3525 delete --grace-period=0 --force -f -'
Jan 31 14:54:45.275: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 14:54:45.276: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 31 14:54:45.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3525 get rc,svc -l name=update-demo --no-headers'
Jan 31 14:54:45.380: INFO: stderr: "No resources found in kubectl-3525 namespace.\n"
Jan 31 14:54:45.380: INFO: stdout: ""
Jan 31 14:54:45.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3525 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 14:54:45.472: INFO: stderr: ""
Jan 31 14:54:45.472: INFO: stdout: "update-demo-nautilus-4ljcs\nupdate-demo-nautilus-rh7zc\n"
Jan 31 14:54:45.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3525 get rc,svc -l name=update-demo --no-headers'
Jan 31 14:54:46.084: INFO: stderr: "No resources found in kubectl-3525 namespace.\n"
Jan 31 14:54:46.084: INFO: stdout: ""
Jan 31 14:54:46.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3525 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 14:54:46.178: INFO: stderr: ""
Jan 31 14:54:46.178: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:54:46.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3525" for this suite.
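Two details make this failure easier to read. First, the pod-state check is a go-template: empty stdout means "not running" and "true" means running. The exact command from this run can be replayed verbatim against the workload cluster:

kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3525 get pods update-demo-nautilus-4ljcs \
  -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'

Second, note that the pods were Running with the right image; what failed, twice and for minutes at a time, was the validator's follow-up request, rejected with an error on the server ("unknown"). That pattern points at the request path through the apiserver during the upgrade rather than at the replication controller under test, though the log alone cannot confirm the root cause.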
• Failure [444.739 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297
    should scale a replication controller [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

    Jan 31 14:54:45.168: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2205
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:44:56.570: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
Jan 31 14:44:56.603: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7543.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7543.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7543.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7543.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7543.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7543.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7543.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7543.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 77.39.142.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.142.39.77_udp@PTR;check="$$(dig +tcp +noall +answer +search 77.39.142.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.142.39.77_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7543.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7543.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7543.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7543.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7543.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7543.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7543.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7543.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7543.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 77.39.142.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.142.39.77_udp@PTR;check="$$(dig +tcp +noall +answer +search 77.39.142.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.142.39.77_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 14:45:06.712: INFO: Unable to read wheezy_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.716: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.719: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.721: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.724: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.727: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.730: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.733: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.736: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.739: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.742: INFO: Unable to read jessie_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.744: INFO: Unable to read jessie_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.747: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.749: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.752: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.755: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.758: INFO: Unable to read jessie_udp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.760: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.764: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.766: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:45:06.766: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: [wheezy_udp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR jessie_udp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@dns-test-service.dns-7543.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR]
(The 5-second polls that follow at 14:45:11, 14:45:16, and 14:45:21 log the identical twenty "Unable to read ..." failures for the same pod; only the timestamps differ.)
Jan 31 14:45:21.830: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: [wheezy_udp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR jessie_udp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@dns-test-service.dns-7543.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local
jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR] Jan 31 14:45:26.770: INFO: Unable to read wheezy_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.773: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.776: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.779: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.782: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.785: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.788: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.791: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.794: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.796: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.799: INFO: Unable to read jessie_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.802: INFO: Unable to read jessie_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.805: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod 
dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.808: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.810: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.813: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.816: INFO: Unable to read jessie_udp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.819: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.821: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.825: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:26.825: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: [wheezy_udp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR jessie_udp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@dns-test-service.dns-7543.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR] Jan 31 14:45:31.771: INFO: Unable to read wheezy_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:31.774: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:31.797: INFO: Unable to read wheezy_tcp@PodARecord 
from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:31.800: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:31.804: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:31.807: INFO: Unable to read jessie_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:31.815: INFO: Unable to read jessie_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:31.819: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:31.821: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:31.825: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:31.828: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:31.831: INFO: Unable to read jessie_udp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:31.834: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:31.837: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:31.840: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:31.840: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: [wheezy_udp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR 
jessie_udp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@dns-test-service.dns-7543.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR] Jan 31 14:45:36.771: INFO: Unable to read wheezy_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:36.774: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:36.794: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:36.797: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:36.800: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:36.804: INFO: Unable to read jessie_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:36.807: INFO: Unable to read jessie_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:36.812: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:36.815: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:36.817: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:36.820: INFO: Unable to read jessie_udp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:36.822: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 
14:45:36.824: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:36.829: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:36.829: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: [wheezy_udp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR jessie_udp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR] Jan 31 14:45:41.770: INFO: Unable to read wheezy_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:41.774: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:41.796: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:41.798: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:41.801: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:41.804: INFO: Unable to read jessie_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:41.807: INFO: Unable to read jessie_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:41.813: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:41.816: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:41.819: INFO: Unable to read 
jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:41.822: INFO: Unable to read jessie_udp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:41.825: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:41.828: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:41.831: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:41.831: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: [wheezy_udp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR jessie_udp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR] Jan 31 14:45:46.773: INFO: Unable to read wheezy_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:46.778: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:46.809: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:46.814: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:46.819: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:46.824: INFO: Unable to read jessie_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:46.829: INFO: Unable to read jessie_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the 
requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:46.839: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:46.848: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:46.857: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:46.865: INFO: Unable to read jessie_udp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:46.870: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:46.874: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:46.879: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:46.879: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: [wheezy_udp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR jessie_udp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR] Jan 31 14:45:51.774: INFO: Unable to read wheezy_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:51.783: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:51.847: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:51.855: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:51.865: INFO: 
Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:51.870: INFO: Unable to read jessie_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:51.880: INFO: Unable to read jessie_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:51.894: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:51.901: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:51.916: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:51.921: INFO: Unable to read jessie_udp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:51.925: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:51.930: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:51.936: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:51.936: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: [wheezy_udp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR jessie_udp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR] Jan 31 14:45:56.801: INFO: Unable to read wheezy_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:56.806: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local 
from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:56.836: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:56.841: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:56.845: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:56.850: INFO: Unable to read jessie_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:56.854: INFO: Unable to read jessie_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:56.863: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:56.868: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:56.873: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:56.877: INFO: Unable to read jessie_udp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:56.882: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:56.887: INFO: Unable to read 10.142.39.77_udp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:56.892: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:45:56.892: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: [wheezy_udp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR jessie_udp@dns-test-service.dns-7543.svc.cluster.local 
jessie_tcp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.142.39.77_udp@PTR 10.142.39.77_tcp@PTR] Jan 31 14:46:01.772: INFO: Unable to read wheezy_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:01.778: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:01.806: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:01.815: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:01.822: INFO: Unable to read jessie_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:01.827: INFO: Unable to read jessie_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:01.843: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:01.849: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:01.854: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:01.859: INFO: Unable to read jessie_udp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:01.865: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:01.874: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:01.874: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: 
[wheezy_udp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@PodARecord 10.142.39.77_tcp@PTR jessie_udp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.142.39.77_tcp@PTR] Jan 31 14:46:06.774: INFO: Unable to read wheezy_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:06.780: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:06.816: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:06.826: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:06.831: INFO: Unable to read jessie_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:06.838: INFO: Unable to read jessie_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:06.858: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:06.873: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:06.878: INFO: Unable to read jessie_udp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:06.884: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:06.896: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:06.896: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: [wheezy_udp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local 
wheezy_tcp@PodARecord 10.142.39.77_tcp@PTR jessie_udp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.142.39.77_tcp@PTR] Jan 31 14:46:11.772: INFO: Unable to read wheezy_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:11.777: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:11.813: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:11.825: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:11.830: INFO: Unable to read jessie_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:11.836: INFO: Unable to read jessie_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:11.847: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:11.856: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:11.860: INFO: Unable to read jessie_udp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:11.865: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:11.876: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:11.876: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: [wheezy_udp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@PodARecord 10.142.39.77_tcp@PTR jessie_udp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@dns-test-service.dns-7543.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7543.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.142.39.77_tcp@PTR] Jan 31 14:46:16.779: INFO: Unable to read wheezy_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:16.785: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:16.824: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:16.837: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:16.841: INFO: Unable to read jessie_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:16.847: INFO: Unable to read jessie_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:16.859: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:16.882: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:16.911: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:16.911: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: [wheezy_udp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@PodARecord 10.142.39.77_tcp@PTR jessie_udp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_tcp@PodARecord 10.142.39.77_tcp@PTR] Jan 31 14:46:21.773: INFO: Unable to read wheezy_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:21.781: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:21.821: INFO: Unable to read 
wheezy_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:21.832: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:21.837: INFO: Unable to read jessie_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:21.842: INFO: Unable to read jessie_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:21.858: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:21.879: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:21.889: INFO: Unable to read 10.142.39.77_tcp@PTR from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:21.889: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: [wheezy_udp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@PodARecord 10.142.39.77_tcp@PTR jessie_udp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local jessie_tcp@PodARecord 10.142.39.77_tcp@PTR] Jan 31 14:46:26.774: INFO: Unable to read wheezy_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:26.780: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:26.821: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:26.840: INFO: Unable to read jessie_udp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:26.847: INFO: Unable to read jessie_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969) Jan 31 14:46:26.857: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:46:26.887: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: [wheezy_udp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@PodARecord jessie_udp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local]
[log condensed: the same "Unable to read <lookup> from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)" entries repeat on a ~5s poll; the failing set narrows to [wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@PodARecord jessie_tcp@dns-test-service.dns-7543.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7543.svc.cluster.local] at 14:46:31, to [wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local wheezy_tcp@PodARecord] from 14:46:41, and to [wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local] alone from 14:51:06, through the final attempt below]
Jan 31 14:55:06.834: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local from pod dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969: the server could not find the requested resource (get pods dns-test-28f0bb28-2863-4095-8663-db85ffeba969)
Jan 31 14:55:06.882: INFO: Lookups using dns-7543/dns-test-28f0bb28-2863-4095-8663-db85ffeba969 failed for: [wheezy_tcp@dns-test-service.dns-7543.svc.cluster.local]
Jan 31 14:55:06.883: FAIL: Unexpected error:
    <*errors.errorString | 0xc0001f6200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc001380900, 0x14, 0x18, 0x4bfaebd, 0x7, 0xc002aa6400, 0x5416760, 0xc0028751e0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:539 +0x18a
k8s.io/kubernetes/test/e2e/network.assertFilesExist(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:533
k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000ed7b80, 0xc002aa6400, 0xc001380900, 0x14, 0x18)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:596 +0x34e
k8s.io/kubernetes/test/e2e/network.glob..func2.5()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:181 +0xea5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00345a480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc00345a480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc00345a480, 0x4df04f8)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:55:06.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7543" for this suite.
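[editor's note] For context on the failure mode above: the conformance DNS test runs query pods that write each lookup result to a file, and the suite polls those files through the API server's pod proxy; a file that never appears surfaces as the "the server could not find the requested resource (get pods ...)" errors logged every ~5s, and the final "timed out waiting for the condition" is the message of wait.ErrWaitTimeout from the poll loop's deadline. Below is a minimal sketch of that polling pattern, assuming a client-go clientset; the 5s/10m cadence, the ":8080" proxy port, and the "results" path are assumptions inferred from the log, not the suite's exact code.

package e2esketch

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// pollResultFiles re-reads each expected result file via the pod proxy
// (GET /api/v1/namespaces/{ns}/pods/{pod}:8080/proxy/results/{file})
// until every file is non-empty or the timeout elapses. On deadline,
// wait.Poll returns wait.ErrWaitTimeout, whose message is exactly
// "timed out waiting for the condition".
func pollResultFiles(ctx context.Context, c kubernetes.Interface, ns, pod string, files []string) error {
	// Interval and timeout are illustrative, not the suite's actual values.
	return wait.Poll(5*time.Second, 10*time.Minute, func() (bool, error) {
		var failed []string
		for _, f := range files {
			data, err := c.CoreV1().RESTClient().Get().
				Namespace(ns).
				Resource("pods").
				Name(pod + ":8080"). // proxy to the pod's result webserver port (assumed)
				SubResource("proxy").
				Suffix("results", f).
				Do(ctx).Raw()
			if err != nil || len(data) == 0 {
				failed = append(failed, f) // e.g. wheezy_tcp@dns-test-service...
			}
		}
		if len(failed) > 0 {
			fmt.Printf("INFO: Lookups using %s/%s failed for: %v\n", ns, pod, failed)
			return false, nil // not done yet; keep polling
		}
		return true, nil
	})
}

With that shape, a wheezy_tcp lookup that keeps failing for the whole window, as above, can only end in the timeout error that the FAIL line reports.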
• Failure [610.436 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  Jan 31 14:55:06.883: Unexpected error:
      <*errors.errorString | 0xc0001f6200>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:539
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:52:14.874: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0131 14:52:15.946119 15 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jan 31 14:57:15.950: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:57:15.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3167" for this suite.
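[editor's note] On what the garbage collector test above exercises: deleting the Deployment with deleteOptions.PropagationPolicy set to Orphan must not cascade, so the GC strips the ReplicaSet's ownerReference instead of deleting it, and the "wait for deployment deletion" step checks the RS survives. A minimal client-go sketch of that delete call, with hypothetical names (not the suite's code):

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteOrphaningDependents removes the owning Deployment while leaving its
// ReplicaSet (and Pods) behind; the garbage collector only clears their
// ownerReferences rather than cascading the delete.
func deleteOrphaningDependents(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return c.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &orphan,
	})
}

The suite then verifies the ReplicaSet still exists, which is why the log shows only a delete, a wait, and metrics gathering before the PASSED line that follows.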
• [SLOW TEST:301.085 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":14,"skipped":196,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:57:15.968: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 14:57:16.000: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 31 14:57:21.003: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 31 14:57:21.004: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
Jan 31 14:57:21.021: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9714 /apis/apps/v1/namespaces/deployment-9714/deployments/test-cleanup-deployment 7a5ec39b-9925-4319-8d0e-5576487a1766 7054 1 2023-01-31 14:57:21 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2023-01-31 14:57:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005235fe8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jan 31 14:57:21.026: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-9714 /apis/apps/v1/namespaces/deployment-9714/replicasets/test-cleanup-deployment-5d446bdd47 39b0c9bb-80ef-4d03-8209-159d7cd695d2 7059 1 2023-01-31 14:57:21 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 7a5ec39b-9925-4319-8d0e-5576487a1766 0xc0062b7427 0xc0062b7428}] [] [{kube-controller-manager Update apps/v1 2023-01-31 14:57:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a5ec39b-9925-4319-8d0e-5576487a1766\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0062b74b8 <nil> ClusterFirst map[] <nil> false false false <nil>
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 31 14:57:21.026: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 31 14:57:21.027: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9714 /apis/apps/v1/namespaces/deployment-9714/replicasets/test-cleanup-controller 43ce5c57-249d-4d11-b3c0-10a5f7259b45 7057 1 2023-01-31 14:57:15 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 7a5ec39b-9925-4319-8d0e-5576487a1766 0xc0062b7327 0xc0062b7328}] [] [{e2e.test Update apps/v1 2023-01-31 14:57:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-31 14:57:21 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"7a5ec39b-9925-4319-8d0e-5576487a1766\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0062b73c8 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 31 14:57:21.033: INFO: Pod "test-cleanup-controller-522zw" is available: &Pod{ObjectMeta:{test-cleanup-controller-522zw test-cleanup-controller- deployment-9714 /api/v1/namespaces/deployment-9714/pods/test-cleanup-controller-522zw b9e2fea0-aad0-48fa-8aa5-4ac6df1c60b2 7031 0 2023-01-31 14:57:16 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 43ce5c57-249d-4d11-b3c0-10a5f7259b45 0xc00367657f 0xc003676590}] [] [{kube-controller-manager Update v1 2023-01-31 14:57:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43ce5c57-249d-4d11-b3c0-10a5f7259b45\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-31 14:57:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.31\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lcqsh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lcqsh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lcqsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-d8uk6o-worker-z043bi,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodRead
inessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-31 14:57:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-31 14:57:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-31 14:57:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-31 14:57:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.6.31,StartTime:2023-01-31 14:57:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-31 14:57:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://53c614710ed3b4eac09f68de542feffca22f3be3bd0224a6fdf6ca34822c107e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.31,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 14:57:21.034: INFO: Pod "test-cleanup-deployment-5d446bdd47-pzc2h" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-pzc2h test-cleanup-deployment-5d446bdd47- deployment-9714 /api/v1/namespaces/deployment-9714/pods/test-cleanup-deployment-5d446bdd47-pzc2h 06ca0251-2f3d-4c93-a33f-baaea89bc474 7060 0 2023-01-31 14:57:21 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 39b0c9bb-80ef-4d03-8209-159d7cd695d2 0xc0036767b7 0xc0036767b8}] [] [{kube-controller-manager Update v1 2023-01-31 14:57:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39b0c9bb-80ef-4d03-8209-159d7cd695d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lcqsh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lcqsh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lcqsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatu
ses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:57:21.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9714" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":15,"skipped":201,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:57:21.078: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Jan 31 14:57:21.105: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:57:25.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-668" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":16,"skipped":217,"failed":0}
SSS
------------------------------
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:57:25.213: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override all
Jan 31 14:57:25.244: INFO: Waiting up to 5m0s for pod "client-containers-0f8a554a-3a3f-4b9b-a478-868d1f9b8449" in namespace "containers-1855" to be "Succeeded or Failed"
Jan 31 14:57:25.247: INFO: Pod "client-containers-0f8a554a-3a3f-4b9b-a478-868d1f9b8449": Phase="Pending", Reason="", readiness=false. Elapsed: 2.786985ms
Jan 31 14:57:27.251: INFO: Pod "client-containers-0f8a554a-3a3f-4b9b-a478-868d1f9b8449": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006828708s
STEP: Saw pod success
Jan 31 14:57:27.251: INFO: Pod "client-containers-0f8a554a-3a3f-4b9b-a478-868d1f9b8449" satisfied condition "Succeeded or Failed"
Jan 31 14:57:27.253: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod client-containers-0f8a554a-3a3f-4b9b-a478-868d1f9b8449 container test-container: <nil>
STEP: delete the pod
Jan 31 14:57:27.275: INFO: Waiting for pod client-containers-0f8a554a-3a3f-4b9b-a478-868d1f9b8449 to disappear
Jan 31 14:57:27.280: INFO: Pod client-containers-0f8a554a-3a3f-4b9b-a478-868d1f9b8449 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:57:27.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1855" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":220,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:57:27.308: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:57:27.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1024" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":18,"skipped":233,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:57:27.378: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 31 14:57:27.410: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:57:33.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1872" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":248,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:53:30.774: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-72a04f92-52eb-4f4a-a3a7-9ea32ba1ce6b in namespace container-probe-9038
Jan 31 14:53:32.824: INFO: Started pod busybox-72a04f92-52eb-4f4a-a3a7-9ea32ba1ce6b in namespace container-probe-9038
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 14:53:32.826: INFO: Initial restart count of pod busybox-72a04f92-52eb-4f4a-a3a7-9ea32ba1ce6b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:57:33.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9038" for this suite.
• [SLOW TEST:242.545 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":417,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:57:33.169: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:57:35.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4961" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":20,"skipped":276,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:57:35.391: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 31 14:57:35.437: INFO: Waiting up to 5m0s for pod "pod-61812d76-1d72-416c-bf41-104f3372d918" in namespace "emptydir-228" to be "Succeeded or Failed"
Jan 31 14:57:35.442: INFO: Pod "pod-61812d76-1d72-416c-bf41-104f3372d918": Phase="Pending", Reason="", readiness=false. Elapsed: 4.992871ms
Jan 31 14:57:37.446: INFO: Pod "pod-61812d76-1d72-416c-bf41-104f3372d918": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009690947s
STEP: Saw pod success
Jan 31 14:57:37.446: INFO: Pod "pod-61812d76-1d72-416c-bf41-104f3372d918" satisfied condition "Succeeded or Failed"
Jan 31 14:57:37.449: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-nqv5p pod pod-61812d76-1d72-416c-bf41-104f3372d918 container test-container: <nil>
STEP: delete the pod
Jan 31 14:57:37.487: INFO: Waiting for pod pod-61812d76-1d72-416c-bf41-104f3372d918 to disappear
Jan 31 14:57:37.490: INFO: Pod pod-61812d76-1d72-416c-bf41-104f3372d918 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:57:37.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-228" for this suite.
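This EmptyDir case (and the non-root variant later in the run) boils down to mounting an emptyDir with medium Memory, i.e. tmpfs, at a given mode and having a short-lived container verify the mount. A minimal stand-alone pod along the same lines, with names of my own choosing:

    # Pod whose /data volume is tmpfs-backed; the container reports the mount
    # and proves it is writable, then exits.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "mount | grep ' /data ' && touch /data/ok"]
        volumeMounts:
        - name: scratch
          mountPath: /data
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory
    EOF
    # Once the pod completes, the log should show a tmpfs mount on /data.
    kubectl logs emptydir-tmpfs-demo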
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":325,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:57:33.338: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should create and stop a working application [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
Jan 31 14:57:33.363: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend

Jan 31 14:57:33.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7889 create -f -'
Jan 31 14:57:33.671: INFO: stderr: ""
Jan 31 14:57:33.671: INFO: stdout: "service/agnhost-replica created\n"
Jan 31 14:57:33.671: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend

Jan 31 14:57:33.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7889 create -f -'
Jan 31 14:57:34.030: INFO: stderr: ""
Jan 31 14:57:34.030: INFO: stdout: "service/agnhost-primary created\n"
Jan 31 14:57:34.030: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 31 14:57:34.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7889 create -f -'
Jan 31 14:57:34.288: INFO: stderr: ""
Jan 31 14:57:34.288: INFO: stdout: "service/frontend created\n"
Jan 31 14:57:34.288: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jan 31 14:57:34.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7889 create -f -'
Jan 31 14:57:34.513: INFO: stderr: ""
Jan 31 14:57:34.514: INFO: stdout: "deployment.apps/frontend created\n"
Jan 31 14:57:34.514: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 31 14:57:34.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7889 create -f -'
Jan 31 14:57:34.750: INFO: stderr: ""
Jan 31 14:57:34.750: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Jan 31 14:57:34.751: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 31 14:57:34.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7889 create -f -'
Jan 31 14:57:35.062: INFO: stderr: ""
Jan 31 14:57:35.062: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Jan 31 14:57:35.062: INFO: Waiting for all frontend pods to be Running.
Jan 31 14:57:40.112: INFO: Waiting for frontend to serve content.
Jan 31 14:57:40.121: INFO: Trying to add a new entry to the guestbook.
Jan 31 14:57:40.131: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 31 14:57:40.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7889 delete --grace-period=0 --force -f -'
Jan 31 14:57:40.292: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 14:57:40.293: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 14:57:40.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7889 delete --grace-period=0 --force -f -'
Jan 31 14:57:40.448: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 14:57:40.448: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 14:57:40.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7889 delete --grace-period=0 --force -f -'
Jan 31 14:57:40.580: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 14:57:40.580: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 14:57:40.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7889 delete --grace-period=0 --force -f -'
Jan 31 14:57:40.674: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 14:57:40.674: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 14:57:40.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7889 delete --grace-period=0 --force -f -'
Jan 31 14:57:40.818: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 14:57:40.818: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 14:57:40.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7889 delete --grace-period=0 --force -f -'
Jan 31 14:57:40.957: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 14:57:40.957: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:57:40.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7889" for this suite.
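Each cleanup step feeds the same manifest back into kubectl delete with --grace-period=0 --force, which is what produces the repeated "Immediate deletion does not wait for confirmation" warning: the API object is removed immediately, while the containers may outlive it briefly on the node. The standalone form of the same call, against one of the objects created above:

    # Skip graceful termination; kubectl warns that the workload may keep
    # running for a short time after the API object disappears.
    kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7889 \
      delete deployment frontend --grace-period=0 --force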
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":20,"skipped":431,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:57:37.527: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] should call prestop when killing a pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating server pod server in namespace prestop-8340
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-8340
STEP: Deleting pre-stop pod
Jan 31 14:57:46.589: INFO: Saw: {
    "Hostname": "server",
    "Sent": null,
    "Received": {
        "prestop": 1
    },
    "Errors": null,
    "Log": [
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    ],
    "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:57:46.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-8340" for this suite.
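The "Received": {"prestop": 1} payload above is the server pod reporting that the tester pod's preStop hook fired exactly once before deletion. The hook itself is ordinary pod lifecycle configuration; a minimal sketch with hypothetical names (the conformance test reports back to a server pod over HTTP rather than using this exec hook):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: prestop-demo
    spec:
      containers:
      - name: main
        image: docker.io/library/busybox:1.29
        command: ["sleep", "3600"]
        lifecycle:
          preStop:
            exec:
              # Runs in the container after deletion is requested, before SIGTERM.
              command: ["sh", "-c", "echo prestop fired > /tmp/hook.log"]
    EOF
    kubectl delete pod prestop-demo   # deletion triggers the preStop hook first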
•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":22,"skipped":344,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:57:46.645: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if kubectl can dry-run update Pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 31 14:57:46.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6921 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod'
Jan 31 14:57:46.802: INFO: stderr: ""
Jan 31 14:57:46.802: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: replace the image in the pod with server-side dry-run
Jan 31 14:57:46.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6921 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "docker.io/library/busybox:1.29"}]}} --dry-run=server'
Jan 31 14:57:47.082: INFO: stderr: ""
Jan 31 14:57:47.082: INFO: stdout: "pod/e2e-test-httpd-pod patched\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine
Jan 31 14:57:47.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6921 delete pods e2e-test-httpd-pod'
Jan 31 14:57:53.484: INFO: stderr: ""
Jan 31 14:57:53.484: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:57:53.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6921" for this suite.
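The dry-run check depends on --dry-run=server semantics: the API server runs admission and validation for the patch and reports success, but persists nothing, so the pod must still show the original httpd image afterwards. Reproduced by hand (the pod name dryrun-demo is hypothetical; kubectl run names the container after the pod):

    kubectl run dryrun-demo --image=docker.io/library/httpd:2.4.38-alpine
    # The patch is admitted and validated server-side but never persisted.
    kubectl patch pod dryrun-demo --dry-run=server -p \
      '{"spec":{"containers":[{"name":"dryrun-demo","image":"docker.io/library/busybox:1.29"}]}}'
    # The stored object still carries the httpd image.
    kubectl get pod dryrun-demo -o jsonpath='{.spec.containers[0].image}'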
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":23,"skipped":365,"failed":0}
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:57:53.494: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Jan 31 14:57:53.524: INFO: Waiting up to 5m0s for pod "var-expansion-c9fcfb43-5b8f-4847-b62b-ba37f0b99498" in namespace "var-expansion-4923" to be "Succeeded or Failed"
Jan 31 14:57:53.530: INFO: Pod "var-expansion-c9fcfb43-5b8f-4847-b62b-ba37f0b99498": Phase="Pending", Reason="", readiness=false. Elapsed: 6.336592ms
Jan 31 14:57:55.534: INFO: Pod "var-expansion-c9fcfb43-5b8f-4847-b62b-ba37f0b99498": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009818071s
STEP: Saw pod success
Jan 31 14:57:55.534: INFO: Pod "var-expansion-c9fcfb43-5b8f-4847-b62b-ba37f0b99498" satisfied condition "Succeeded or Failed"
Jan 31 14:57:55.537: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod var-expansion-c9fcfb43-5b8f-4847-b62b-ba37f0b99498 container dapi-container: <nil>
STEP: delete the pod
Jan 31 14:57:55.551: INFO: Waiting for pod var-expansion-c9fcfb43-5b8f-4847-b62b-ba37f0b99498 to disappear
Jan 31 14:57:55.554: INFO: Pod var-expansion-c9fcfb43-5b8f-4847-b62b-ba37f0b99498 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:57:55.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4923" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":-1,"completed":24,"skipped":365,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:57:55.571: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 31 14:57:55.604: INFO: Waiting up to 5m0s for pod "pod-eb099ab8-9581-412d-a6c9-b1ca8638da3a" in namespace "emptydir-5376" to be "Succeeded or Failed"
Jan 31 14:57:55.606: INFO: Pod "pod-eb099ab8-9581-412d-a6c9-b1ca8638da3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142043ms
Jan 31 14:57:57.612: INFO: Pod "pod-eb099ab8-9581-412d-a6c9-b1ca8638da3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008310336s
STEP: Saw pod success
Jan 31 14:57:57.612: INFO: Pod "pod-eb099ab8-9581-412d-a6c9-b1ca8638da3a" satisfied condition "Succeeded or Failed"
Jan 31 14:57:57.615: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-eb099ab8-9581-412d-a6c9-b1ca8638da3a container test-container: <nil>
STEP: delete the pod
Jan 31 14:57:57.630: INFO: Waiting for pod pod-eb099ab8-9581-412d-a6c9-b1ca8638da3a to disappear
Jan 31 14:57:57.633: INFO: Pod pod-eb099ab8-9581-412d-a6c9-b1ca8638da3a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:57:57.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5376" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":370,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:57:57.676: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service nodeport-service with the type=NodePort in namespace services-9272
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-9272
STEP: creating replication controller externalsvc in namespace services-9272
I0131 14:57:57.745763      15 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9272, replica count: 2
I0131 14:58:00.796262      15 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the NodePort service to type=ExternalName
Jan 31 14:58:00.822: INFO: Creating new exec pod
Jan 31 14:58:02.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9272 exec execpodrqh8s -- /bin/sh -x -c nslookup nodeport-service.services-9272.svc.cluster.local'
Jan 31 14:58:03.057: INFO: stderr: "+ nslookup nodeport-service.services-9272.svc.cluster.local\n"
Jan 31 14:58:03.058: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nnodeport-service.services-9272.svc.cluster.local\tcanonical name = externalsvc.services-9272.svc.cluster.local.\nName:\texternalsvc.services-9272.svc.cluster.local\nAddress: 10.132.200.216\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-9272, will wait for the garbage collector to delete the pods
Jan 31 14:58:03.117: INFO: Deleting ReplicationController externalsvc took: 6.106556ms
Jan 31 14:58:03.618: INFO: Terminating ReplicationController externalsvc pods took: 500.299045ms
Jan 31 14:58:07.355: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:58:07.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9272" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":26,"skipped":393,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:57:41.044: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 14:57:41.488: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 14:57:44.512: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 14:57:44.516: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4693-crds.webhook.example.com via the AdmissionRegistration API
Jan 31 14:57:55.040: INFO: Waiting for webhook configuration to be ready...
Jan 31 14:58:05.154: INFO: Waiting for webhook configuration to be ready...
Jan 31 14:58:15.253: INFO: Waiting for webhook configuration to be ready...
Jan 31 14:58:25.351: INFO: Waiting for webhook configuration to be ready...
Jan 31 14:58:35.360: INFO: Waiting for webhook configuration to be ready...
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:57:41.044: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 14:57:41.488: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 14:57:44.512: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 14:57:44.516: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4693-crds.webhook.example.com via the AdmissionRegistration API
Jan 31 14:57:55.040: INFO: Waiting for webhook configuration to be ready...
Jan 31 14:58:05.154: INFO: Waiting for webhook configuration to be ready...
Jan 31 14:58:15.253: INFO: Waiting for webhook configuration to be ready...
Jan 31 14:58:25.351: INFO: Waiting for webhook configuration to be ready...
Jan 31 14:58:35.360: INFO: Waiting for webhook configuration to be ready...
Jan 31 14:58:35.361: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002641f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerMutatingWebhookForCustomResource(0xc000db0b00, 0xc0010f8530, 0xc, 0xc001e0b9f0, 0xc0004e9780, 0x20fb, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826 +0xc6a
k8s.io/kubernetes/test/e2e/apimachinery.glob..func22.11()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:294 +0xc9
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00353c180)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc00353c180)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc00353c180, 0x4df04f8)
    /usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:58:35.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7370" for this suite.
STEP: Destroying namespace "webhook-7370-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [54.903 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  Jan 31 14:58:35.361: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002641f0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":20,"skipped":481,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
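The registration step that timed out above creates a MutatingWebhookConfiguration pointing at the freshly deployed sample-webhook service, then polls readiness by admitting marker requests. The object it registers is roughly equivalent to the following (a sketch; the metadata name and path are illustrative and the caBundle is elided):

    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: mutate-custom-resource.example.com   # illustrative
    webhooks:
    - name: mutate-custom-resource.example.com
      admissionReviewVersions: ["v1", "v1beta1"]
      sideEffects: None
      failurePolicy: Fail
      rules:
      - apiGroups: ["webhook.example.com"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["e2e-test-webhook-4693-crds"]
      clientConfig:
        service:
          namespace: webhook-7370
          name: e2e-test-webhook
          path: /mutating-custom-resource   # illustrative
        # caBundle: <elided>

A timeout at this point generally means the webhook backend never successfully answered the readiness-probe admission requests within the ~50s poll window, which matches the five "Waiting for webhook configuration to be ready..." polls above.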
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:58:35.949: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 14:58:36.684: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 14:58:39.711: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 14:58:39.716: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7983-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:58:40.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9576" for this suite.
STEP: Destroying namespace "webhook-9576-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":21,"skipped":481,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:58:07.409: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-5710, will wait for the garbage collector to delete the pods
Jan 31 14:58:11.535: INFO: Deleting Job.batch foo took: 5.83032ms
Jan 31 14:58:11.635: INFO: Terminating Job.batch foo pods took: 100.25397ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:58:48.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5710" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":27,"skipped":399,"failed":0}
------------------------------
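The delete-and-wait pattern in this spec maps onto a cascading delete: the Job object goes away quickly, then the garbage collector reaps its pods. A rough manual equivalent (the job name matches the log; the test itself creates the Job through the API with a parallelism of 2, and the --cascade value syntax shown assumes a recent kubectl, where older clients spell it --cascade=true):

    # Create a job, then delete it and let the garbage collector reap its pods.
    kubectl create job foo --image=busybox -- sleep 3600
    kubectl delete job foo --cascade=background
    # "Ensuring job was deleted" then amounts to polling both of these until empty:
    kubectl get job foo
    kubectl get pods -l job-name=foo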
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:58:48.764: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 14:58:49.328: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 14:58:52.348: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:59:02.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7391" for this suite.
STEP: Destroying namespace "webhook-7391-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":28,"skipped":411,"failed":0}
------------------------------
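This spec's deny-everything webhook plus namespace whitelist corresponds to a ValidatingWebhookConfiguration along these lines (a sketch: the object name, path, and exclusion label are illustrative; the caBundle is elided):

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: deny-pods-and-configmaps.example.com   # illustrative
    webhooks:
    - name: deny-pods-and-configmaps.example.com
      admissionReviewVersions: ["v1", "v1beta1"]
      sideEffects: None
      failurePolicy: Fail
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods", "configmaps"]
      namespaceSelector:                 # namespaces carrying this label bypass the webhook
        matchExpressions:
        - key: skip-webhook              # illustrative label
          operator: DoesNotExist
      clientConfig:
        service:
          namespace: webhook-7391
          name: e2e-test-webhook
          path: /always-deny             # illustrative
        # caBundle: <elided>

The PUT/PATCH steps above verify that the rules' UPDATE operation is enforced on both update verbs, and the whitelisted-namespace step verifies the namespaceSelector escape hatch.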
[BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:58:40.986: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-aeb8b1b6-8028-4396-bdf8-838f213fa13c in namespace container-probe-6060
Jan 31 14:58:43.038: INFO: Started pod liveness-aeb8b1b6-8028-4396-bdf8-838f213fa13c in namespace container-probe-6060
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 14:58:43.041: INFO: Initial restart count of pod liveness-aeb8b1b6-8028-4396-bdf8-838f213fa13c is 0
Jan 31 14:59:03.080: INFO: Restart count of pod container-probe-6060/liveness-aeb8b1b6-8028-4396-bdf8-838f213fa13c is now 1 (20.039518962s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:59:03.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6060" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":504,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
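The restart observed after ~20s is driven by an HTTP liveness probe against /healthz. A minimal pod that behaves the same way, sketched (image tag and timings are illustrative; the e2e fixture uses agnhost's liveness server, which deliberately starts failing /healthz shortly after startup so the kubelet restarts the container):

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-http                 # illustrative
    spec:
      containers:
      - name: liveness
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # illustrative tag
        args: ["liveness"]
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          failureThreshold: 1             # one failed probe triggers a restart

The test then simply watches status.containerStatuses[0].restartCount tick from 0 to 1, as the two INFO lines above record.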
[BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:59:03.127: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-b1a132c9-5152-4a47-92d9-4da7cc0a3042
STEP: Creating a pod to test consume secrets
Jan 31 14:59:03.169: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c4f358d9-6d7c-4e37-974b-0a99ee2c9ef5" in namespace "projected-2257" to be "Succeeded or Failed"
Jan 31 14:59:03.173: INFO: Pod "pod-projected-secrets-c4f358d9-6d7c-4e37-974b-0a99ee2c9ef5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.223011ms
Jan 31 14:59:05.177: INFO: Pod "pod-projected-secrets-c4f358d9-6d7c-4e37-974b-0a99ee2c9ef5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00672738s
STEP: Saw pod success
Jan 31 14:59:05.177: INFO: Pod "pod-projected-secrets-c4f358d9-6d7c-4e37-974b-0a99ee2c9ef5" satisfied condition "Succeeded or Failed"
Jan 31 14:59:05.179: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-projected-secrets-c4f358d9-6d7c-4e37-974b-0a99ee2c9ef5 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 31 14:59:05.197: INFO: Waiting for pod pod-projected-secrets-c4f358d9-6d7c-4e37-974b-0a99ee2c9ef5 to disappear
Jan 31 14:59:05.200: INFO: Pod pod-projected-secrets-c4f358d9-6d7c-4e37-974b-0a99ee2c9ef5 no longer exists
[AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:59:05.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2257" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":520,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
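"Mappings" in this spec's name means the secret's keys are renamed on disk via items/path when projected into the volume. A sketch of the shape involved (pod, secret, and path names are illustrative, not the fixture's generated names):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets         # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/projected-secret-volume
      volumes:
      - name: secret-volume
        projected:
          sources:
          - secret:
              name: my-projected-secret   # illustrative
              items:
              - key: data-1
                path: new-path-data-1     # the mapping: key renamed on disk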
[BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:59:05.228: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 31 14:59:07.772: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3171 pod-service-account-c193fa2e-c244-4f8e-9e25-5a85ce040902 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 31 14:59:07.944: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3171 pod-service-account-c193fa2e-c244-4f8e-9e25-5a85ce040902 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 31 14:59:08.110: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3171 pod-service-account-c193fa2e-c244-4f8e-9e25-5a85ce040902 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:59:08.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3171" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":24,"skipped":531,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
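The three exec calls above read the standard service-account projection that every pod receives; the mount path is fixed by the kubelet:

    # token, CA bundle, and namespace are mounted read-only at a well-known path:
    kubectl exec <pod> -c test -- ls /var/run/secrets/kubernetes.io/serviceaccount
    # ca.crt  namespace  token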
[BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:59:08.409: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Jan 31 14:59:10.462: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1151 PodName:pod-sharedvolume-f4a05d66-5a9b-4e10-a04f-277feed937f0 ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:59:10.462: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 31 14:59:10.538: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:59:10.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1151" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":25,"skipped":586,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
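The shared-volume check works because both containers mount the same emptyDir: one writes the file, the other reads it back via the exec above. A minimal equivalent, sketched (names and the message are illustrative; the mount path matches the log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-sharedvolume              # illustrative
    spec:
      containers:
      - name: busybox-main-container      # reader: the exec'd `cat` runs here
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: shared-data
          mountPath: /usr/share/volumeshare
      - name: busybox-sub-container       # writer
        image: busybox
        command: ["sh", "-c", "echo 'Hello from the second container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
        volumeMounts:
        - name: shared-data
          mountPath: /usr/share/volumeshare
      volumes:
      - name: shared-data
        emptyDir: {}                      # same backing directory for both mounts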
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:59:10.552: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:59:10.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8696" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":26,"skipped":589,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
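The discovery walk this spec performs can be reproduced against any cluster with raw API requests (a sketch; jq is assumed only for readability):

    kubectl get --raw /apis | jq '.groups[] | select(.name == "apiextensions.k8s.io")'
    kubectl get --raw /apis/apiextensions.k8s.io | jq '.versions'
    kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq '.resources[].name'
    # the last list must contain "customresourcedefinitions"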
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:59:10.636: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 14:59:10.661: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 31 14:59:13.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9744 --namespace=crd-publish-openapi-9744 create -f -'
Jan 31 14:59:14.202: INFO: stderr: ""
Jan 31 14:59:14.203: INFO: stdout: "e2e-test-crd-publish-openapi-5091-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 31 14:59:14.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9744 --namespace=crd-publish-openapi-9744 delete e2e-test-crd-publish-openapi-5091-crds test-cr'
Jan 31 14:59:14.310: INFO: stderr: ""
Jan 31 14:59:14.310: INFO: stdout: "e2e-test-crd-publish-openapi-5091-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jan 31 14:59:14.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9744 --namespace=crd-publish-openapi-9744 apply -f -'
Jan 31 14:59:14.550: INFO: stderr: ""
Jan 31 14:59:14.550: INFO: stdout: "e2e-test-crd-publish-openapi-5091-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 31 14:59:14.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9744 --namespace=crd-publish-openapi-9744 delete e2e-test-crd-publish-openapi-5091-crds test-cr'
Jan 31 14:59:14.648: INFO: stderr: ""
Jan 31 14:59:14.648: INFO: stdout: "e2e-test-crd-publish-openapi-5091-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 31 14:59:14.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9744 explain e2e-test-crd-publish-openapi-5091-crds'
Jan 31 14:59:14.885: INFO: stderr: ""
Jan 31 14:59:14.885: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5091-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t<Object>\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:59:17.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9744" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":27,"skipped":623,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
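The `spec <>` in the explain output above is the signature of a schema node that sets x-kubernetes-preserve-unknown-fields without declaring a type. A sketch of such a CRD (group and kind are illustrative; the descriptions mirror the explain output):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: waldos.example.com            # illustrative
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: waldos
        singular: waldo
        kind: Waldo
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            description: preserve-unknown-properties in nested field for Testing
            properties:
              spec:                       # untyped + preserved: kubectl explain prints `spec <>`
                description: Specification of Waldo
                x-kubernetes-preserve-unknown-fields: true
              status:
                description: Status of Waldo
                type: object

With this in place, `kubectl create`/`apply` accept arbitrary properties under spec (nothing is pruned), which is what the client-side validation steps above verify.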
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:59:17.822: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jan 31 14:59:17.856: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cd379aa-c178-46a3-80fd-7f9461e6cca5" in namespace "projected-1984" to be "Succeeded or Failed"
Jan 31 14:59:17.859: INFO: Pod "downwardapi-volume-7cd379aa-c178-46a3-80fd-7f9461e6cca5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382512ms
Jan 31 14:59:19.862: INFO: Pod "downwardapi-volume-7cd379aa-c178-46a3-80fd-7f9461e6cca5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006154185s
STEP: Saw pod success
Jan 31 14:59:19.862: INFO: Pod "downwardapi-volume-7cd379aa-c178-46a3-80fd-7f9461e6cca5" satisfied condition "Succeeded or Failed"
Jan 31 14:59:19.865: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod downwardapi-volume-7cd379aa-c178-46a3-80fd-7f9461e6cca5 container client-container: <nil>
STEP: delete the pod
Jan 31 14:59:19.881: INFO: Waiting for pod downwardapi-volume-7cd379aa-c178-46a3-80fd-7f9461e6cca5 to disappear
Jan 31 14:59:19.884: INFO: Pod downwardapi-volume-7cd379aa-c178-46a3-80fd-7f9461e6cca5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:59:19.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1984" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":633,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
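The volume the spec reads is a downwardAPI projection of the container's own memory request, sketched (pod name, request size, and file path are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-example    # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
        resources:
          requests:
            memory: 32Mi
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:             # exposes the container's own resource field
              containerName: client-container
              resource: requests.memory

The test asserts the file's contents against the declared request, then deletes the pod, as the log above records.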
[BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:59:19.922: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-a880ca67-c218-40b0-a51b-e282f16ac6e2
STEP: Creating a pod to test consume secrets
Jan 31 14:59:19.954: INFO: Waiting up to 5m0s for pod "pod-secrets-43ad6180-65ad-410f-9d55-b6e32373ea9c" in namespace "secrets-8678" to be "Succeeded or Failed"
Jan 31 14:59:19.957: INFO: Pod "pod-secrets-43ad6180-65ad-410f-9d55-b6e32373ea9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.653782ms
Jan 31 14:59:21.961: INFO: Pod "pod-secrets-43ad6180-65ad-410f-9d55-b6e32373ea9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007143666s
STEP: Saw pod success
Jan 31 14:59:21.961: INFO: Pod "pod-secrets-43ad6180-65ad-410f-9d55-b6e32373ea9c" satisfied condition "Succeeded or Failed"
Jan 31 14:59:21.964: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-secrets-43ad6180-65ad-410f-9d55-b6e32373ea9c container secret-volume-test: <nil>
STEP: delete the pod
Jan 31 14:59:21.979: INFO: Waiting for pod pod-secrets-43ad6180-65ad-410f-9d55-b6e32373ea9c to disappear
Jan 31 14:59:21.982: INFO: Pod pod-secrets-43ad6180-65ad-410f-9d55-b6e32373ea9c no longer exists
[AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:59:21.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8678" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":656,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:59:22.001: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-13c323e4-5ad0-4e2f-949a-c9e243de09a0
STEP: Creating secret with name secret-projected-all-test-volume-80f159c7-176f-4fb5-a4e5-4656354b1566
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 31 14:59:22.039: INFO: Waiting up to 5m0s for pod "projected-volume-40885c72-fb7c-4a1e-b4fb-c8b8611ad067" in namespace "projected-2001" to be "Succeeded or Failed"
Jan 31 14:59:22.041: INFO: Pod "projected-volume-40885c72-fb7c-4a1e-b4fb-c8b8611ad067": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12123ms
Jan 31 14:59:24.045: INFO: Pod "projected-volume-40885c72-fb7c-4a1e-b4fb-c8b8611ad067": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005789142s
STEP: Saw pod success
Jan 31 14:59:24.045: INFO: Pod "projected-volume-40885c72-fb7c-4a1e-b4fb-c8b8611ad067" satisfied condition "Succeeded or Failed"
Jan 31 14:59:24.048: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod projected-volume-40885c72-fb7c-4a1e-b4fb-c8b8611ad067 container projected-all-volume-test: <nil>
STEP: delete the pod
Jan 31 14:59:24.063: INFO: Waiting for pod projected-volume-40885c72-fb7c-4a1e-b4fb-c8b8611ad067 to disappear
Jan 31 14:59:24.066: INFO: Pod projected-volume-40885c72-fb7c-4a1e-b4fb-c8b8611ad067 no longer exists
[AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:59:24.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2001" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":663,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
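"All components" refers to the source types a single projected volume can merge into one directory: configMap, secret, and downwardAPI data, exactly the trio of objects created above. A sketch (resource names illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-volume-example      # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: projected-all-volume-test
        image: busybox
        command: ["sh", "-c", "ls /all-volumes && cat /all-volumes/podname"]
        volumeMounts:
        - name: all-in-one
          mountPath: /all-volumes
      volumes:
      - name: all-in-one
        projected:
          sources:
          - configMap:
              name: my-configmap          # illustrative
          - secret:
              name: my-secret             # illustrative
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name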
[BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:59:02.612: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-2db2ddea-6bf4-44b2-8484-60bcd531ae02 in namespace container-probe-7863
Jan 31 14:59:04.651: INFO: Started pod busybox-2db2ddea-6bf4-44b2-8484-60bcd531ae02 in namespace container-probe-7863
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 14:59:04.654: INFO: Initial restart count of pod busybox-2db2ddea-6bf4-44b2-8484-60bcd531ae02 is 0
Jan 31 14:59:58.768: INFO: Restart count of pod container-probe-7863/busybox-2db2ddea-6bf4-44b2-8484-60bcd531ae02 is now 1 (54.113864493s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 14:59:58.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7863" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":467,"failed":0}
------------------------------
[BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:59:58.802: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-6605/configmap-test-7aa62049-4cf1-4f28-a6ad-856d4185d0b3
STEP: Creating a pod to test consume configMaps
Jan 31 14:59:58.845: INFO: Waiting up to 5m0s for pod "pod-configmaps-39ffe215-c4a2-454c-9087-dd1afb765956" in namespace "configmap-6605" to be "Succeeded or Failed"
Jan 31 14:59:58.854: INFO: Pod "pod-configmaps-39ffe215-c4a2-454c-9087-dd1afb765956": Phase="Pending", Reason="", readiness=false. Elapsed: 8.572221ms
Jan 31 15:00:00.858: INFO: Pod "pod-configmaps-39ffe215-c4a2-454c-9087-dd1afb765956": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012888512s
STEP: Saw pod success
Jan 31 15:00:00.858: INFO: Pod "pod-configmaps-39ffe215-c4a2-454c-9087-dd1afb765956" satisfied condition "Succeeded or Failed"
Jan 31 15:00:00.861: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-configmaps-39ffe215-c4a2-454c-9087-dd1afb765956 container env-test: <nil>
STEP: delete the pod
Jan 31 15:00:00.880: INFO: Waiting for pod pod-configmaps-39ffe215-c4a2-454c-9087-dd1afb765956 to disappear
Jan 31 15:00:00.882: INFO: Pod pod-configmaps-39ffe215-c4a2-454c-9087-dd1afb765956 no longer exists
[AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:00:00.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6605" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":471,"failed":0}
------------------------------
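"Consumable via the environment" means the configMap key is injected as an environment variable rather than a file, sketched (names and key are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmap-env             # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox
        command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
        env:
        - name: CONFIG_DATA_1
          valueFrom:
            configMapKeyRef:
              name: configmap-test        # illustrative
              key: data-1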
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:00:00.910: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 31 15:00:01.406: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 31 15:00:03.415: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810774001, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810774001, loc:(*time.Location)(0x771eac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810774001, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810774001, loc:(*time.Location)(0x771eac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 15:00:06.429: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 15:00:06.434: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:00:07.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-4808" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":31,"skipped":482,"failed":0}
------------------------------
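Listing a mixed set of CRs in either version forces the API server to round-trip stored objects through the conversion webhook. Wiring that up looks roughly like this (a sketch: group, names, and path are illustrative, schemas are reduced to bare objects, and the caBundle is elided):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.stable.example.com    # illustrative
    spec:
      group: stable.example.com
      scope: Namespaced
      names:
        plural: widgets
        singular: widget
        kind: Widget
      conversion:
        strategy: Webhook                 # instead of the default None
        webhook:
          conversionReviewVersions: ["v1", "v1beta1"]
          clientConfig:
            service:
              namespace: crd-webhook-4808
              name: e2e-test-crd-conversion-webhook
              path: /crdconvert           # illustrative
            # caBundle: <elided>
      versions:
      - name: v1
        served: true
        storage: true                     # v2 objects are converted to v1 for storage
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
      - name: v2
        served: true
        storage: false
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true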
[BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:00:07.752: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-5390
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 15:00:07.778: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 31 15:00:07.821: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 15:00:09.825: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 15:00:11.824: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 15:00:13.826: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 15:00:15.825: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 15:00:17.825: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 15:00:19.825: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 15:00:21.824: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 15:00:23.825: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 31 15:00:23.834: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jan 31 15:00:23.839: INFO: The status of Pod netserver-2 is Running (Ready = false)
Jan 31 15:00:25.843: INFO: The status of Pod netserver-2 is Running (Ready = false)
Jan 31 15:00:27.843: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jan 31 15:00:27.848: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jan 31 15:00:29.878: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.0.29:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5390 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 15:00:29.878: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 31 15:00:29.967: INFO: Found all expected endpoints: [netserver-0]
Jan 31 15:00:29.970: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.1.11:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5390 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 15:00:29.970: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 31 15:00:30.056: INFO: Found all expected endpoints: [netserver-1]
Jan 31 15:00:30.059: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.14:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5390 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 15:00:30.059: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 31 15:00:30.150: INFO: Found all expected endpoints: [netserver-2]
Jan 31 15:00:30.153: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.6.59:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5390 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 15:00:30.153: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 31 15:00:30.227: INFO: Found all expected endpoints: [netserver-3]
[AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:00:30.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5390" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":509,"failed":0}
------------------------------
namespace "crd-publish-openapi-1431" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":33,"skipped":566,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 31 15:00:44.554: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 �[1mSTEP�[0m: Creating secret with name s-test-opt-del-261348e0-096c-494a-aa63-b05a7fdbc7a2 �[1mSTEP�[0m: Creating secret with name s-test-opt-upd-1f2e583e-ca36-4e2b-8ba7-32adbddd45be �[1mSTEP�[0m: Creating the pod �[1mSTEP�[0m: Deleting secret s-test-opt-del-261348e0-096c-494a-aa63-b05a7fdbc7a2 �[1mSTEP�[0m: Updating secret s-test-opt-upd-1f2e583e-ca36-4e2b-8ba7-32adbddd45be �[1mSTEP�[0m: Creating secret with name s-test-opt-create-1a8ecad2-6aac-4a36-87aa-b922da08f8f8 �[1mSTEP�[0m: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:00:48.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-396" for this suite. 
[BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:00:44.554: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-261348e0-096c-494a-aa63-b05a7fdbc7a2
STEP: Creating secret with name s-test-opt-upd-1f2e583e-ca36-4e2b-8ba7-32adbddd45be
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-261348e0-096c-494a-aa63-b05a7fdbc7a2
STEP: Updating secret s-test-opt-upd-1f2e583e-ca36-4e2b-8ba7-32adbddd45be
STEP: Creating secret with name s-test-opt-create-1a8ecad2-6aac-4a36-87aa-b922da08f8f8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:00:48.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-396" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":585,"failed":0}
------------------------------
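The delete/update/create dance above works because every secret source in the projected volume is marked optional, so the kubelet keeps the pod running and reconciles the volume contents as secrets come and go. The relevant fragment, sketched (pod and secret names illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-optional   # illustrative
    spec:
      containers:
      - name: creds-volume-test
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: creds
          mountPath: /etc/creds
      volumes:
      - name: creds
        projected:
          sources:
          - secret:
              name: s-test-opt-del      # deleted mid-test; tolerated because optional
              optional: true
          - secret:
              name: s-test-opt-create   # created mid-test; appears in the volume afterwards
              optional: true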
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:00:48.682: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating cluster-info
Jan 31 15:00:48.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7912 cluster-info'
Jan 31 15:00:48.801: INFO: stderr: ""
Jan 31 15:00:48.801: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.18.0.3:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:00:48.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7912" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":-1,"completed":35,"skipped":591,"failed":0}
------------------------------
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":629,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 31 14:59:24.138: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6277 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6277 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6277 Jan 31 14:59:24.188: INFO: Found 0 stateful pods, waiting for 1 Jan 31 14:59:34.192: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 31 14:59:34.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6277 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 14:59:34.369: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 31 14:59:34.369: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 14:59:34.369: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 14:59:34.373: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 31 14:59:44.376: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 31 14:59:44.377: INFO: Waiting for statefulset status.replicas updated to 0 Jan 31 14:59:44.388: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999674s Jan 31 14:59:45.392: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997115429s Jan 31 14:59:46.396: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.993215594s Jan 31 14:59:47.401: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.988890428s Jan 31 14:59:48.406: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.9844377s Jan 31 14:59:49.410: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.97947438s Jan 31 14:59:50.414: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.975421541s Jan 31 14:59:51.418:
INFO: Verifying statefulset ss doesn't scale past 1 for another 2.971701182s Jan 31 14:59:52.422: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.967256305s Jan 31 14:59:53.425: INFO: Verifying statefulset ss doesn't scale past 1 for another 963.547117ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6277 Jan 31 14:59:54.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6277 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 14:59:54.612: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 31 14:59:54.613: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 31 14:59:54.613: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 31 14:59:54.616: INFO: Found 1 stateful pods, waiting for 3 Jan 31 15:00:04.619: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 31 15:00:04.620: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 31 15:00:04.620: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 31 15:00:04.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6277 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 15:00:04.794: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 31 15:00:04.794: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 15:00:04.794: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 15:00:04.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6277 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 15:00:04.962: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 31 15:00:04.962: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 15:00:04.962: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 15:00:04.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6277 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 15:00:05.134: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 31 15:00:05.134: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 15:00:05.134: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 15:00:05.134: INFO: Waiting for statefulset status.replicas updated to 0 Jan 31 15:00:05.137: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jan 31 15:00:15.144: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 31 15:00:15.144: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 31
15:00:15.144: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 31 15:00:15.156: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999585s Jan 31 15:00:16.160: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996016625s Jan 31 15:00:17.164: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991545873s Jan 31 15:00:18.169: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987561843s Jan 31 15:00:19.173: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.983056227s Jan 31 15:00:20.177: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.978989002s Jan 31 15:00:21.181: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.97487434s Jan 31 15:00:22.185: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.970622073s Jan 31 15:00:23.189: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.966515804s Jan 31 15:00:24.194: INFO: Verifying statefulset ss doesn't scale past 3 for another 962.311877ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6277 Jan 31 15:00:25.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6277 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 15:00:25.367: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 31 15:00:25.367: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 31 15:00:25.367: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 31 15:00:25.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6277 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 15:00:25.553: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 31 15:00:25.553: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 31 15:00:25.553: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 31 15:00:25.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6277 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 15:00:25.721: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 31 15:00:25.721: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 31 15:00:25.721: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 31 15:00:25.721: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 31 15:00:55.735: INFO: Deleting all statefulset in ns statefulset-6277 Jan 31 15:00:55.737: INFO: Scaling statefulset ss to 0 Jan 31 15:00:55.746: INFO: Waiting for statefulset status.replicas updated to 0 Jan 31 15:00:55.748: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:00:55.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6277" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":31,"skipped":713,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]} S ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 31 15:00:55.773: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jan 31 15:00:55.806: INFO: Waiting up to 5m0s for pod "downward-api-439e6b65-3348-45d2-aac0-a26ed6f3937a" in namespace "downward-api-3448" to be "Succeeded or Failed" Jan 31 15:00:55.809: INFO: Pod "downward-api-439e6b65-3348-45d2-aac0-a26ed6f3937a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.723736ms Jan 31 15:00:57.813: INFO: Pod "downward-api-439e6b65-3348-45d2-aac0-a26ed6f3937a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006836587s STEP: Saw pod success Jan 31 15:00:57.813: INFO: Pod "downward-api-439e6b65-3348-45d2-aac0-a26ed6f3937a" satisfied condition "Succeeded or Failed" Jan 31 15:00:57.816: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod downward-api-439e6b65-3348-45d2-aac0-a26ed6f3937a container dapi-container: <nil> STEP: delete the pod Jan 31 15:00:57.834: INFO: Waiting for pod downward-api-439e6b65-3348-45d2-aac0-a26ed6f3937a to disappear Jan 31 15:00:57.837: INFO: Pod downward-api-439e6b65-3348-45d2-aac0-a26ed6f3937a no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:00:57.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3448" for this suite.
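The StatefulSet scaling test that completed above hinges on one mechanism: the ss pods serve /usr/local/apache2/htdocs/index.html behind an HTTP readiness probe, so moving the file away flips the pod to Ready=false, and under the default OrderedReady pod management policy the controller refuses to scale past an unhealthy pod in either direction. A minimal Go sketch of the same readiness toggle, reusing the kubeconfig, namespace, and mv commands from the log (setReady is a hypothetical helper, not part of the test framework):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// setReady makes the named pod pass (true) or fail (false) its HTTP
// readiness probe by moving the probed file back and forth.
func setReady(pod string, ready bool) error {
	cmd := "mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true"
	if ready {
		cmd = "mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true"
	}
	out, err := exec.Command("kubectl",
		"--kubeconfig", "/tmp/kubeconfig", "--namespace", "statefulset-6277",
		"exec", pod, "--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	// Break readiness on ss-0: scale-up now halts at 1 replica.
	if err := setReady("ss-0", false); err != nil {
		log.Fatal(err)
	}
	// Restore readiness: ordered scale-up can proceed to ss-1, ss-2.
	if err := setReady("ss-0", true); err != nil {
		log.Fatal(err)
	}
}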
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":714,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 31 15:00:57.874: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 31 15:00:57.921: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2628 /api/v1/namespaces/watch-2628/configmaps/e2e-watch-test-resource-version 9066f2b0-975a-4e49-b9fa-d1656b25195d 9376 0 2023-01-31 15:00:57 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-31 15:00:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 31 15:00:57.921: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2628 /api/v1/namespaces/watch-2628/configmaps/e2e-watch-test-resource-version 9066f2b0-975a-4e49-b9fa-d1656b25195d 9377 0 2023-01-31 15:00:57 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-31 15:00:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:00:57.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2628" for this suite.
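The Watchers test above demonstrates the pattern that makes level-triggered clients possible: a watch opened at the resourceVersion returned by an earlier write replays, in order, every event that happened after that point (here the second MODIFIED and the DELETED). A minimal client-go sketch of the same pattern; the kubeconfig path and label selector mirror the log, while the namespace and starting resourceVersion are illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Assume rv was taken from the ConfigMap returned by the first update;
	// the watch then replays everything that happened after that write.
	rv := "9375" // illustrative value, one version before the events above
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(),
		metav1.ListOptions{
			LabelSelector:   "watch-this-configmap=from-resource-version",
			ResourceVersion: rv,
		})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}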
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":33,"skipped":738,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]} S ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 31 15:00:57.932: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-4j24m in namespace proxy-830 I0131 15:00:57.977960 14 runners.go:190] Created replication controller with name: proxy-service-4j24m, namespace: proxy-830, replica count: 1 I0131 15:00:59.028797 14 runners.go:190] proxy-service-4j24m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0131 15:01:00.029046 14 runners.go:190] proxy-service-4j24m Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 31 15:01:00.032: INFO: setup took 2.076081613s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 31 15:01:00.042: INFO: (0) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 9.131522ms) Jan 31 15:01:00.043: INFO: (0) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 10.648235ms) Jan 31 15:01:00.043: INFO: (0) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 10.948878ms) Jan 31 15:01:00.050: INFO: (0) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 17.674864ms) Jan 31 15:01:00.050: INFO: (0) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 17.682842ms) Jan 31 15:01:00.050: INFO: (0) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... (200; 17.712594ms) Jan 31 15:01:00.050: INFO: (0) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 17.670462ms) Jan 31 15:01:00.050: INFO: (0) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</...
(200; 17.849097ms) Jan 31 15:01:00.050: INFO: (0) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 17.728885ms) Jan 31 15:01:00.050: INFO: (0) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 17.933725ms) Jan 31 15:01:00.054: INFO: (0) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 21.752379ms) Jan 31 15:01:00.054: INFO: (0) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... (200; 21.736373ms) Jan 31 15:01:00.055: INFO: (0) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 21.8862ms) Jan 31 15:01:00.055: INFO: (0) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 22.511375ms) Jan 31 15:01:00.055: INFO: (0) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 22.362804ms) Jan 31 15:01:00.055: INFO: (0) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 22.929837ms) Jan 31 15:01:00.061: INFO: (1) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 4.802499ms) Jan 31 15:01:00.061: INFO: (1) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 5.475747ms) Jan 31 15:01:00.061: INFO: (1) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 5.37014ms) Jan 31 15:01:00.061: INFO: (1) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 5.287294ms) Jan 31 15:01:00.062: INFO: (1) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... (200; 5.580896ms) Jan 31 15:01:00.062: INFO: (1) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 5.814274ms) Jan 31 15:01:00.062: INFO: (1) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 5.620966ms) Jan 31 15:01:00.062: INFO: (1) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... 
(200; 5.958296ms) Jan 31 15:01:00.062: INFO: (1) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 6.171655ms) Jan 31 15:01:00.062: INFO: (1) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 6.429322ms) Jan 31 15:01:00.064: INFO: (1) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 7.771267ms) Jan 31 15:01:00.064: INFO: (1) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 7.978791ms) Jan 31 15:01:00.064: INFO: (1) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 8.324489ms) Jan 31 15:01:00.064: INFO: (1) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 8.68488ms) Jan 31 15:01:00.064: INFO: (1) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 8.203096ms) Jan 31 15:01:00.064: INFO: (1) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 8.549565ms) Jan 31 15:01:00.072: INFO: (2) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... (200; 7.090858ms) Jan 31 15:01:00.072: INFO: (2) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 7.177149ms) Jan 31 15:01:00.072: INFO: (2) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 7.678306ms) Jan 31 15:01:00.073: INFO: (2) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 8.174559ms) Jan 31 15:01:00.073: INFO: (2) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 8.079195ms) Jan 31 15:01:00.073: INFO: (2) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 8.295834ms) Jan 31 15:01:00.073: INFO: (2) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... (200; 8.080039ms) Jan 31 15:01:00.073: INFO: (2) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 8.201683ms) Jan 31 15:01:00.073: INFO: (2) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 8.26407ms) Jan 31 15:01:00.075: INFO: (2) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 10.340336ms) Jan 31 15:01:00.075: INFO: (2) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... 
(200; 10.335176ms) Jan 31 15:01:00.076: INFO: (2) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 11.149813ms) Jan 31 15:01:00.076: INFO: (2) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 11.127906ms) Jan 31 15:01:00.076: INFO: (2) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 11.361056ms) Jan 31 15:01:00.076: INFO: (2) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 11.411563ms) Jan 31 15:01:00.076: INFO: (2) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 11.341818ms) Jan 31 15:01:00.083: INFO: (3) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 6.791045ms) Jan 31 15:01:00.083: INFO: (3) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 6.686558ms) Jan 31 15:01:00.083: INFO: (3) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 6.764208ms) Jan 31 15:01:00.083: INFO: (3) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... (200; 6.746035ms) Jan 31 15:01:00.083: INFO: (3) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 7.003196ms) Jan 31 15:01:00.083: INFO: (3) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... 
(200; 6.731532ms) Jan 31 15:01:00.083: INFO: (3) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 6.809206ms) Jan 31 15:01:00.083: INFO: (3) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 6.801172ms) Jan 31 15:01:00.083: INFO: (3) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 6.741795ms) Jan 31 15:01:00.083: INFO: (3) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 7.105968ms) Jan 31 15:01:00.084: INFO: (3) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 7.354349ms) Jan 31 15:01:00.084: INFO: (3) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 7.417546ms) Jan 31 15:01:00.084: INFO: (3) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 7.689591ms) Jan 31 15:01:00.086: INFO: (3) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 9.187251ms) Jan 31 15:01:00.086: INFO: (3) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 9.265527ms) Jan 31 15:01:00.086: INFO: (3) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 9.531766ms) Jan 31 15:01:00.093: INFO: (4) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 6.799037ms) Jan 31 15:01:00.093: INFO: (4) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 6.609557ms) Jan 31 15:01:00.093: INFO: (4) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 6.632979ms) Jan 31 15:01:00.093: INFO: (4) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 7.002823ms) Jan 31 15:01:00.093: INFO: (4) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... (200; 6.783187ms) Jan 31 15:01:00.093: INFO: (4) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 7.028188ms) Jan 31 15:01:00.093: INFO: (4) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 7.29302ms) Jan 31 15:01:00.093: INFO: (4) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... 
(200; 7.603992ms) Jan 31 15:01:00.093: INFO: (4) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 7.282666ms) Jan 31 15:01:00.093: INFO: (4) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 7.482242ms) Jan 31 15:01:00.094: INFO: (4) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 7.968927ms) Jan 31 15:01:00.095: INFO: (4) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 8.986829ms) Jan 31 15:01:00.095: INFO: (4) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 9.107796ms) Jan 31 15:01:00.095: INFO: (4) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 9.388207ms) Jan 31 15:01:00.095: INFO: (4) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 9.399582ms) Jan 31 15:01:00.096: INFO: (4) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 9.476852ms) Jan 31 15:01:00.098: INFO: (5) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 2.319112ms) Jan 31 15:01:00.104: INFO: (5) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 7.884713ms) Jan 31 15:01:00.104: INFO: (5) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 8.493613ms) Jan 31 15:01:00.104: INFO: (5) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 8.494688ms) Jan 31 15:01:00.105: INFO: (5) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 9.084694ms) Jan 31 15:01:00.105: INFO: (5) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... (200; 9.431162ms) Jan 31 15:01:00.106: INFO: (5) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 9.925727ms) Jan 31 15:01:00.106: INFO: (5) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 10.057368ms) Jan 31 15:01:00.106: INFO: (5) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 10.008766ms) Jan 31 15:01:00.106: INFO: (5) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 10.005008ms) Jan 31 15:01:00.106: INFO: (5) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 9.95311ms) Jan 31 15:01:00.106: INFO: (5) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 10.127173ms) Jan 31 15:01:00.106: INFO: (5) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 10.161038ms) Jan 31 15:01:00.106: INFO: (5) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 9.922673ms) Jan 31 15:01:00.106: INFO: (5) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 10.081204ms) Jan 31 15:01:00.106: INFO: (5) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... 
(200; 10.380848ms) Jan 31 15:01:00.113: INFO: (6) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 6.163891ms) Jan 31 15:01:00.113: INFO: (6) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... (200; 6.369326ms) Jan 31 15:01:00.113: INFO: (6) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 6.082151ms) Jan 31 15:01:00.113: INFO: (6) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 6.57861ms) Jan 31 15:01:00.114: INFO: (6) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 7.312482ms) Jan 31 15:01:00.115: INFO: (6) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 8.599009ms) Jan 31 15:01:00.115: INFO: (6) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 8.975129ms) Jan 31 15:01:00.116: INFO: (6) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 8.992131ms) Jan 31 15:01:00.116: INFO: (6) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 8.99342ms) Jan 31 15:01:00.116: INFO: (6) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 9.956264ms) Jan 31 15:01:00.117: INFO: (6) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 10.114098ms) Jan 31 15:01:00.117: INFO: (6) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 10.084915ms) Jan 31 15:01:00.117: INFO: (6) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 10.159459ms) Jan 31 15:01:00.117: INFO: (6) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... (200; 10.58905ms) Jan 31 15:01:00.117: INFO: (6) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 10.735439ms) Jan 31 15:01:00.117: INFO: (6) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 11.001597ms) Jan 31 15:01:00.122: INFO: (7) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 4.543568ms) Jan 31 15:01:00.122: INFO: (7) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 4.590628ms) Jan 31 15:01:00.122: INFO: (7) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 4.507315ms) Jan 31 15:01:00.122: INFO: (7) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... 
(200; 4.664615ms) Jan 31 15:01:00.122: INFO: (7) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 4.597319ms) Jan 31 15:01:00.123: INFO: (7) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 5.063341ms) Jan 31 15:01:00.123: INFO: (7) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 5.137657ms) Jan 31 15:01:00.123: INFO: (7) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... (200; 5.058502ms) Jan 31 15:01:00.124: INFO: (7) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 6.025106ms) Jan 31 15:01:00.124: INFO: (7) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 6.554663ms) Jan 31 15:01:00.125: INFO: (7) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 7.415009ms) Jan 31 15:01:00.125: INFO: (7) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 7.571396ms) Jan 31 15:01:00.125: INFO: (7) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 7.523924ms) Jan 31 15:01:00.125: INFO: (7) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 7.594863ms) Jan 31 15:01:00.125: INFO: (7) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 7.559712ms) Jan 31 15:01:00.126: INFO: (7) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 7.967051ms) Jan 31 15:01:00.131: INFO: (8) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 5.617911ms) Jan 31 15:01:00.131: INFO: (8) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 5.759956ms) Jan 31 15:01:00.132: INFO: (8) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 5.861776ms) Jan 31 15:01:00.132: INFO: (8) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 5.751486ms) Jan 31 15:01:00.132: INFO: (8) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 5.90159ms) Jan 31 15:01:00.132: INFO: (8) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... (200; 5.844037ms) Jan 31 15:01:00.132: INFO: (8) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... 
(200; 5.990591ms) Jan 31 15:01:00.132: INFO: (8) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 5.831751ms) Jan 31 15:01:00.132: INFO: (8) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 6.007297ms) Jan 31 15:01:00.132: INFO: (8) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 6.08053ms) Jan 31 15:01:00.134: INFO: (8) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 7.780122ms) Jan 31 15:01:00.134: INFO: (8) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 7.969028ms) Jan 31 15:01:00.134: INFO: (8) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 8.096357ms) Jan 31 15:01:00.134: INFO: (8) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 8.00186ms) Jan 31 15:01:00.135: INFO: (8) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 8.795006ms) Jan 31 15:01:00.135: INFO: (8) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 8.80922ms) Jan 31 15:01:00.141: INFO: (9) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 6.351194ms) Jan 31 15:01:00.141: INFO: (9) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 6.380467ms) Jan 31 15:01:00.141: INFO: (9) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 6.512315ms) Jan 31 15:01:00.141: INFO: (9) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 6.501814ms) Jan 31 15:01:00.141: INFO: (9) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 6.568892ms) Jan 31 15:01:00.142: INFO: (9) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 7.029571ms) Jan 31 15:01:00.142: INFO: (9) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 7.151862ms) Jan 31 15:01:00.142: INFO: (9) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... (200; 6.988846ms) Jan 31 15:01:00.142: INFO: (9) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... 
(200; 7.106934ms) Jan 31 15:01:00.142: INFO: (9) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 7.089747ms) Jan 31 15:01:00.144: INFO: (9) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 9.257049ms) Jan 31 15:01:00.144: INFO: (9) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 9.256688ms) Jan 31 15:01:00.144: INFO: (9) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 9.239744ms) Jan 31 15:01:00.144: INFO: (9) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 9.540939ms) Jan 31 15:01:00.145: INFO: (9) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 9.860722ms) Jan 31 15:01:00.146: INFO: (9) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 11.788452ms) Jan 31 15:01:00.152: INFO: (10) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 5.021945ms) Jan 31 15:01:00.152: INFO: (10) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 5.366732ms) Jan 31 15:01:00.156: INFO: (10) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 8.986742ms) Jan 31 15:01:00.156: INFO: (10) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... (200; 9.462279ms) Jan 31 15:01:00.161: INFO: (10) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 13.980785ms) Jan 31 15:01:00.163: INFO: (10) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 16.066511ms) Jan 31 15:01:00.163: INFO: (10) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 16.305437ms) Jan 31 15:01:00.163: INFO: (10) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... 
(200; 16.154722ms) Jan 31 15:01:00.163: INFO: (10) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 16.711691ms) Jan 31 15:01:00.163: INFO: (10) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 16.797604ms) Jan 31 15:01:00.163: INFO: (10) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 16.763255ms) Jan 31 15:01:00.164: INFO: (10) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 17.07636ms) Jan 31 15:01:00.164: INFO: (10) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 16.929422ms) Jan 31 15:01:00.164: INFO: (10) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 17.20845ms) Jan 31 15:01:00.164: INFO: (10) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 17.142202ms) Jan 31 15:01:00.164: INFO: (10) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 17.127145ms) Jan 31 15:01:00.170: INFO: (11) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 5.951335ms) Jan 31 15:01:00.170: INFO: (11) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 6.237441ms) Jan 31 15:01:00.171: INFO: (11) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 6.633661ms) Jan 31 15:01:00.171: INFO: (11) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 7.273618ms) Jan 31 15:01:00.171: INFO: (11) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 7.312798ms) Jan 31 15:01:00.171: INFO: (11) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... (200; 7.391669ms) Jan 31 15:01:00.172: INFO: (11) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 7.606749ms) Jan 31 15:01:00.172: INFO: (11) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 7.515161ms) Jan 31 15:01:00.172: INFO: (11) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... 
(200; 7.727868ms) Jan 31 15:01:00.172: INFO: (11) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 7.877562ms) Jan 31 15:01:00.172: INFO: (11) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 8.087671ms) Jan 31 15:01:00.172: INFO: (11) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 8.238327ms) Jan 31 15:01:00.172: INFO: (11) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 8.355543ms) Jan 31 15:01:00.172: INFO: (11) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 8.438093ms) Jan 31 15:01:00.173: INFO: (11) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 8.598007ms) Jan 31 15:01:00.173: INFO: (11) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 8.634922ms) Jan 31 15:01:00.177: INFO: (12) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 3.618783ms) Jan 31 15:01:00.179: INFO: (12) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 5.641391ms) Jan 31 15:01:00.179: INFO: (12) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... (200; 5.67849ms) Jan 31 15:01:00.179: INFO: (12) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 5.746739ms) Jan 31 15:01:00.179: INFO: (12) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 6.340326ms) Jan 31 15:01:00.180: INFO: (12) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... 
(200; 7.545805ms) Jan 31 15:01:00.180: INFO: (12) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 7.347463ms) Jan 31 15:01:00.180: INFO: (12) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 7.322108ms) Jan 31 15:01:00.180: INFO: (12) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 7.372139ms) Jan 31 15:01:00.180: INFO: (12) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 7.455085ms) Jan 31 15:01:00.180: INFO: (12) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 7.545106ms) Jan 31 15:01:00.180: INFO: (12) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 7.4195ms) Jan 31 15:01:00.180: INFO: (12) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 7.537114ms) Jan 31 15:01:00.181: INFO: (12) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 7.883222ms) Jan 31 15:01:00.181: INFO: (12) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 8.064664ms) Jan 31 15:01:00.181: INFO: (12) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 8.293417ms) Jan 31 15:01:00.187: INFO: (13) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 5.9227ms) Jan 31 15:01:00.188: INFO: (13) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 6.268947ms) Jan 31 15:01:00.188: INFO: (13) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 6.378744ms) Jan 31 15:01:00.188: INFO: (13) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... (200; 6.382022ms) Jan 31 15:01:00.188: INFO: (13) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 6.606768ms) Jan 31 15:01:00.188: INFO: (13) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 6.376229ms) Jan 31 15:01:00.188: INFO: (13) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... 
(200; 6.375257ms) Jan 31 15:01:00.188: INFO: (13) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 6.484779ms) Jan 31 15:01:00.188: INFO: (13) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 6.77686ms) Jan 31 15:01:00.188: INFO: (13) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 7.009694ms) Jan 31 15:01:00.189: INFO: (13) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 7.909918ms) Jan 31 15:01:00.191: INFO: (13) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 9.38709ms) Jan 31 15:01:00.191: INFO: (13) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 9.316021ms) Jan 31 15:01:00.191: INFO: (13) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 9.408792ms) Jan 31 15:01:00.191: INFO: (13) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 9.479226ms) Jan 31 15:01:00.191: INFO: (13) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 9.402984ms) Jan 31 15:01:00.197: INFO: (14) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 6.074608ms) Jan 31 15:01:00.197: INFO: (14) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... (200; 6.298303ms) Jan 31 15:01:00.197: INFO: (14) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 5.677237ms) Jan 31 15:01:00.197: INFO: (14) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 5.429736ms) Jan 31 15:01:00.197: INFO: (14) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 5.910219ms) Jan 31 15:01:00.197: INFO: (14) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 6.416428ms) Jan 31 15:01:00.197: INFO: (14) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... 
(200; 6.029089ms) Jan 31 15:01:00.197: INFO: (14) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 5.992613ms) Jan 31 15:01:00.197: INFO: (14) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 5.825565ms) Jan 31 15:01:00.197: INFO: (14) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 5.617157ms) Jan 31 15:01:00.201: INFO: (14) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 10.448169ms) Jan 31 15:01:00.202: INFO: (14) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 9.790962ms) Jan 31 15:01:00.202: INFO: (14) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 9.761485ms) Jan 31 15:01:00.202: INFO: (14) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 10.023432ms) Jan 31 15:01:00.202: INFO: (14) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 10.087006ms) Jan 31 15:01:00.202: INFO: (14) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 10.210446ms) Jan 31 15:01:00.211: INFO: (15) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 9.014894ms) Jan 31 15:01:00.211: INFO: (15) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 9.404734ms) Jan 31 15:01:00.212: INFO: (15) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 9.253874ms) Jan 31 15:01:00.212: INFO: (15) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... (200; 9.872929ms) Jan 31 15:01:00.212: INFO: (15) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 9.833791ms) Jan 31 15:01:00.214: INFO: (15) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 12.184355ms) Jan 31 15:01:00.215: INFO: (15) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... (200; 12.416534ms) Jan 31 15:01:00.215: INFO: (15) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... 
(200; 12.342719ms) Jan 31 15:01:00.215: INFO: (15) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 12.40571ms) Jan 31 15:01:00.215: INFO: (15) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 12.940244ms) Jan 31 15:01:00.215: INFO: (15) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 12.374572ms) Jan 31 15:01:00.215: INFO: (15) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 12.421594ms) Jan 31 15:01:00.215: INFO: (15) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 12.692873ms) Jan 31 15:01:00.215: INFO: (15) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 12.864774ms) Jan 31 15:01:00.215: INFO: (15) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 12.846883ms) Jan 31 15:01:00.216: INFO: (15) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 13.768204ms) Jan 31 15:01:00.222: INFO: (16) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 5.500977ms) Jan 31 15:01:00.222: INFO: (16) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 6.367128ms) Jan 31 15:01:00.223: INFO: (16) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 6.54293ms) Jan 31 15:01:00.226: INFO: (16) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 9.983428ms) Jan 31 15:01:00.228: INFO: (16) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 12.327695ms) Jan 31 15:01:00.229: INFO: (16) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 12.649773ms) Jan 31 15:01:00.229: INFO: (16) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 13.074769ms) Jan 31 15:01:00.229: INFO: (16) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 13.089693ms) Jan 31 15:01:00.229: INFO: (16) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 13.061909ms) Jan 31 15:01:00.229: INFO: (16) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 13.055487ms) Jan 31 15:01:00.229: INFO: (16) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... (200; 13.244542ms) Jan 31 15:01:00.229: INFO: (16) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 13.234332ms) Jan 31 15:01:00.229: INFO: (16) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... 
(200; 13.264946ms) Jan 31 15:01:00.233: INFO: (16) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 16.989817ms) Jan 31 15:01:00.233: INFO: (16) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 17.06525ms) Jan 31 15:01:00.233: INFO: (16) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 17.340659ms) Jan 31 15:01:00.241: INFO: (17) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 7.602128ms) Jan 31 15:01:00.249: INFO: (17) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 15.692698ms) Jan 31 15:01:00.250: INFO: (17) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... (200; 15.446546ms) Jan 31 15:01:00.250: INFO: (17) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 16.175508ms) Jan 31 15:01:00.252: INFO: (17) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 17.658016ms) Jan 31 15:01:00.252: INFO: (17) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 17.654492ms) Jan 31 15:01:00.252: INFO: (17) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... (200; 17.576756ms) Jan 31 15:01:00.252: INFO: (17) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 17.712278ms) Jan 31 15:01:00.252: INFO: (17) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 17.773386ms) Jan 31 15:01:00.252: INFO: (17) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 17.827332ms) Jan 31 15:01:00.252: INFO: (17) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 18.784005ms) Jan 31 15:01:00.253: INFO: (17) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 18.545809ms) Jan 31 15:01:00.253: INFO: (17) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 18.830368ms) Jan 31 15:01:00.254: INFO: (17) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 20.456096ms) Jan 31 15:01:00.254: INFO: (17) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 20.569781ms) Jan 31 15:01:00.254: INFO: (17) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 20.303052ms) Jan 31 15:01:00.263: INFO: (18) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 7.935125ms) Jan 31 15:01:00.268: INFO: (18) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... 
(200; 12.83541ms) Jan 31 15:01:00.268: INFO: (18) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 12.823548ms) Jan 31 15:01:00.268: INFO: (18) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 13.086022ms) Jan 31 15:01:00.268: INFO: (18) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 12.849363ms) Jan 31 15:01:00.271: INFO: (18) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 15.274663ms) Jan 31 15:01:00.271: INFO: (18) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... (200; 15.493705ms) Jan 31 15:01:00.271: INFO: (18) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 15.72269ms) Jan 31 15:01:00.271: INFO: (18) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 15.972827ms) Jan 31 15:01:00.271: INFO: (18) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 16.291708ms) Jan 31 15:01:00.271: INFO: (18) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... (200; 16.630211ms) Jan 31 15:01:00.271: INFO: (18) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 16.517068ms) Jan 31 15:01:00.271: INFO: (18) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 16.647009ms) Jan 31 15:01:00.271: INFO: (18) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 16.526874ms) Jan 31 15:01:00.272: INFO: (18) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 16.82247ms) Jan 31 15:01:00.274: INFO: (18) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 18.815169ms) Jan 31 15:01:00.302: INFO: (19) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4/proxy/rewriteme">test</a> (200; 27.956983ms) Jan 31 15:01:00.303: INFO: (19) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:462/proxy/: tls qux (200; 28.663178ms) Jan 31 15:01:00.303: INFO: (19) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:160/proxy/: foo (200; 28.629196ms) Jan 31 15:01:00.303: INFO: (19) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:1080/proxy/rewriteme">t... (200; 28.742946ms) Jan 31 15:01:00.303: INFO: (19) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname1/proxy/: foo (200; 28.895215ms) Jan 31 15:01:00.303: INFO: (19) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:160/proxy/: foo (200; 29.156715ms) Jan 31 15:01:00.304: INFO: (19) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:1080/proxy/rewriteme">test</... 
(200; 29.193693ms) Jan 31 15:01:00.304: INFO: (19) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname2/proxy/: tls qux (200; 29.702984ms) Jan 31 15:01:00.304: INFO: (19) /api/v1/namespaces/proxy-830/pods/proxy-service-4j24m-44gm4:162/proxy/: bar (200; 30.301739ms) Jan 31 15:01:00.306: INFO: (19) /api/v1/namespaces/proxy-830/pods/http:proxy-service-4j24m-44gm4:162/proxy/: bar (200; 31.096105ms) Jan 31 15:01:00.306: INFO: (19) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname2/proxy/: bar (200; 32.170419ms) Jan 31 15:01:00.307: INFO: (19) /api/v1/namespaces/proxy-830/services/http:proxy-service-4j24m:portname2/proxy/: bar (200; 32.214391ms) Jan 31 15:01:00.307: INFO: (19) /api/v1/namespaces/proxy-830/services/proxy-service-4j24m:portname1/proxy/: foo (200; 32.75212ms) Jan 31 15:01:00.307: INFO: (19) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:460/proxy/: tls baz (200; 32.634561ms) Jan 31 15:01:00.307: INFO: (19) /api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/: <a href="/api/v1/namespaces/proxy-830/pods/https:proxy-service-4j24m-44gm4:443/proxy/tlsrewriteme... (200; 33.182699ms) Jan 31 15:01:00.308: INFO: (19) /api/v1/namespaces/proxy-830/services/https:proxy-service-4j24m:tlsportname1/proxy/: tls baz (200; 33.270252ms) STEP: deleting ReplicationController proxy-service-4j24m in namespace proxy-830, will wait for the garbage collector to delete the pods Jan 31 15:01:00.367: INFO: Deleting ReplicationController proxy-service-4j24m took: 5.319906ms Jan 31 15:01:00.467: INFO: Terminating ReplicationController proxy-service-4j24m pods took: 100.241008ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:01:02.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-830" for this suite.
• ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":34,"skipped":739,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 31 15:00:50.974: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-7764 STEP: creating replication controller nodeport-test in namespace services-7764 I0131 15:00:51.021226 15 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-7764, replica count: 2 I0131 15:00:54.072527 15 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 31 15:00:54.072: INFO: Creating new exec pod Jan 31 15:00:57.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7764 exec execpoddpgwf -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jan 31 15:00:57.258: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Jan 31 15:00:57.258: INFO: stdout: "" Jan 31 15:00:57.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7764 exec execpoddpgwf -- /bin/sh -x -c nc -zv -t -w 2 10.140.251.179 80' Jan 31 15:00:59.433: INFO: rc: 1 Jan 31 15:00:59.433: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7764 exec execpoddpgwf -- /bin/sh -x -c nc -zv -t -w 2 10.140.251.179 80: Command stdout: stderr: + nc -zv -t -w 2 10.140.251.179 80 nc: connect to 10.140.251.179 port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:01:00.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7764 exec execpoddpgwf -- /bin/sh -x -c nc -zv -t -w 2 10.140.251.179 80' Jan 31 15:01:02.612: INFO: rc: 1 Jan 31 15:01:02.612: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7764 exec execpoddpgwf -- /bin/sh -x -c nc -zv -t -w 2 10.140.251.179 80: Command stdout: stderr: + nc -zv -t -w 2 10.140.251.179 80 nc: connect to 10.140.251.179 port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying...
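For reference, the reachability probe being retried above can be replayed by hand against the same cluster; a minimal sketch using the exec pod, ClusterIP, and port from this run (all three values differ per run):

kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7764 \
  exec execpoddpgwf -- /bin/sh -x -c 'nc -zv -t -w 2 10.140.251.179 80'

The test tolerates a few failed probes by design: kube-proxy may still be programming rules for the newly created Service when the first probe fires, which is why the timeouts above end in "Retrying..." rather than an immediate failure.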
Jan 31 15:01:03.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7764 exec execpoddpgwf -- /bin/sh -x -c nc -zv -t -w 2 10.140.251.179 80' Jan 31 15:01:03.616: INFO: stderr: "+ nc -zv -t -w 2 10.140.251.179 80\nConnection to 10.140.251.179 80 port [tcp/http] succeeded!\n" Jan 31 15:01:03.616: INFO: stdout: "" Jan 31 15:01:03.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7764 exec execpoddpgwf -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.4 31708' Jan 31 15:01:03.816: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.4 31708\nConnection to 172.18.0.4 31708 port [tcp/31708] succeeded!\n" Jan 31 15:01:03.816: INFO: stdout: "" Jan 31 15:01:03.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7764 exec execpoddpgwf -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 31708' Jan 31 15:01:04.007: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.5 31708\nConnection to 172.18.0.5 31708 port [tcp/31708] succeeded!\n" Jan 31 15:01:04.007: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:01:04.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7764" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":37,"skipped":649,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 31 15:01:02.878: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jan 31 15:01:02.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e32f1bdf-13b7-4d2f-baf8-8780cc37f1c2" in namespace "downward-api-7831" to be "Succeeded or Failed" Jan 31 15:01:02.912: INFO: Pod "downwardapi-volume-e32f1bdf-13b7-4d2f-baf8-8780cc37f1c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059953ms Jan 31 15:01:04.916: INFO: Pod "downwardapi-volume-e32f1bdf-13b7-4d2f-baf8-8780cc37f1c2": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.005904305s STEP: Saw pod success Jan 31 15:01:04.916: INFO: Pod "downwardapi-volume-e32f1bdf-13b7-4d2f-baf8-8780cc37f1c2" satisfied condition "Succeeded or Failed" Jan 31 15:01:04.919: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod downwardapi-volume-e32f1bdf-13b7-4d2f-baf8-8780cc37f1c2 container client-container: <nil> STEP: delete the pod Jan 31 15:01:04.933: INFO: Waiting for pod downwardapi-volume-e32f1bdf-13b7-4d2f-baf8-8780cc37f1c2 to disappear Jan 31 15:01:04.936: INFO: Pod downwardapi-volume-e32f1bdf-13b7-4d2f-baf8-8780cc37f1c2 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:01:04.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7831" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":740,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 31 15:01:04.957: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jan 31 15:01:04.982: INFO: >>> kubeConfig: /tmp/kubeconfig [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server.
Jan 31 15:01:05.243: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jan 31 15:01:07.306: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810774065, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810774065, loc:(*time.Location)(0x771eac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810774065, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810774065, loc:(*time.Location)(0x771eac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 15:01:09.315: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810774065, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810774065, loc:(*time.Location)(0x771eac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810774065, loc:(*time.Location)(0x771eac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810774065, loc:(*time.Location)(0x771eac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 15:01:12.137: INFO: Waited 818.163135ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:01:12.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-1554" for this suite.
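The "sample API server" registered above is an extension apiserver that the aggregation layer fronts via an APIService object; the e2e framework does this in Go. Purely as an illustrative sketch of that registration (the group, version, and Service name below are hypothetical, not taken from this run):

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com    # hypothetical <version>.<group>
spec:
  group: wardle.example.com            # hypothetical API group served by the extension server
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api                   # hypothetical Service fronting sample-apiserver-deployment
    namespace: default
    port: 443
  caBundle: <base64 CA>                # CA that signed the extension server's serving certificate
EOF

Once such an object exists, kube-apiserver proxies requests for that group/version to the backing Service, which is what the "Waited ... for the sample-apiserver to be ready to handle requests" step above is polling for.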
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":36,"skipped":750,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 31 15:01:12.791: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:01:12.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9754" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":37,"skipped":757,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 31 15:01:12.870: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
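The full &Pod dump that follows is noisy, but the knobs under test reduce to a few lines of spec. A minimal equivalent manifest, using the nameserver, search domain, and image visible in this run (the pod name here is hypothetical; the test generates its own):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-example              # hypothetical name
spec:
  dnsPolicy: None                # ignore the node/cluster resolv.conf entirely
  dnsConfig:                     # with dnsPolicy=None this becomes the pod's whole resolv.conf
    nameservers: ["1.1.1.1"]
    searches: ["resolv.conf.local"]
  containers:
    - name: agnhost
      image: k8s.gcr.io/e2e-test-images/agnhost:2.20
      args: ["pause"]
EOF

With dnsPolicy=None, dnsConfig is the only source of the pod's resolver configuration, which is exactly what the two agnhost checks (dns-suffix, dns-server-list) verify afterwards.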
Jan 31 15:01:12.900: INFO: Created pod &Pod{ObjectMeta:{dns-9246 dns-9246 /api/v1/namespaces/dns-9246/pods/dns-9246 5add1dce-8aa9-435d-95a6-a33f2dfe08f2 9671 0 2023-01-31 15:01:12 +0000 UTC <nil> <nil> map[] map[] [] [] [{e2e.test Update v1 2023-01-31 15:01:12 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ttmgs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ttmgs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ttmgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{
},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 15:01:12.907: INFO: The status of Pod dns-9246 is Pending, waiting for it to be Running (with Ready = true) Jan 31 15:01:14.910: INFO: The status of Pod dns-9246 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Jan 31 15:01:14.910: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9246 PodName:dns-9246 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 31 15:01:14.910: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Verifying customized DNS server is configured on pod... Jan 31 15:01:14.993: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9246 PodName:dns-9246 ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 31 15:01:14.993: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 31 15:01:15.105: INFO: Deleting pod dns-9246... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:01:15.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9246" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":38,"skipped":761,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 31 15:01:04.031: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible
to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:01:27.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1117" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":659,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 31 15:01:27.297: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Jan 31 15:01:27.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2341 create -f -' Jan 31 15:01:27.637: INFO: stderr: "" Jan 31 15:01:27.637: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 31 15:01:28.640: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 15:01:28.640: INFO: Found 0 / 1 Jan 31 15:01:29.641: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 15:01:29.641: INFO: Found 1 / 1 Jan 31 15:01:29.641: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 31 15:01:29.643: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 15:01:29.643: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 31 15:01:29.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2341 patch pod agnhost-primary-mr8jw -p {"metadata":{"annotations":{"x":"y"}}}' Jan 31 15:01:29.744: INFO: stderr: "" Jan 31 15:01:29.744: INFO: stdout: "pod/agnhost-primary-mr8jw patched\n" STEP: checking annotations Jan 31 15:01:29.748: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 15:01:29.748: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:01:29.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2341" for this suite.
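For reference, the annotation patch the test just verified is an ordinary strategic-merge patch and can be replayed as-is, then checked (the pod name and namespace are from this run):

kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2341 \
  patch pod agnhost-primary-mr8jw -p '{"metadata":{"annotations":{"x":"y"}}}'
# confirm the annotation landed; expected output: y
kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2341 \
  get pod agnhost-primary-mr8jw -o jsonpath='{.metadata.annotations.x}'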
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":39,"skipped":679,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 31 15:01:29.779: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-9c22ae2f-fdcc-43e7-bc87-a5b1d22036cd STEP: Creating a pod to test consume configMaps Jan 31 15:01:29.818: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f5d9d246-0c3c-484e-9130-0abf7916c9b4" in namespace "projected-650" to be "Succeeded or Failed" Jan 31 15:01:29.821: INFO: Pod "pod-projected-configmaps-f5d9d246-0c3c-484e-9130-0abf7916c9b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.904734ms Jan 31 15:01:31.825: INFO: Pod "pod-projected-configmaps-f5d9d246-0c3c-484e-9130-0abf7916c9b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006737006s STEP: Saw pod success Jan 31 15:01:31.825: INFO: Pod "pod-projected-configmaps-f5d9d246-0c3c-484e-9130-0abf7916c9b4" satisfied condition "Succeeded or Failed" Jan 31 15:01:31.828: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-projected-configmaps-f5d9d246-0c3c-484e-9130-0abf7916c9b4 container projected-configmap-volume-test: <nil> STEP: delete the pod Jan 31 15:01:31.844: INFO: Waiting for pod pod-projected-configmaps-f5d9d246-0c3c-484e-9130-0abf7916c9b4 to disappear Jan 31 15:01:31.846: INFO: Pod pod-projected-configmaps-f5d9d246-0c3c-484e-9130-0abf7916c9b4 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:01:31.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-650" for this suite.
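"Mappings and Item mode set" means the projected volume renames a ConfigMap key to a different file path and gives that file a non-default mode. A hand-written sketch under assumed names (only the shape, not the values, comes from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-example              # hypothetical
spec:
  restartPolicy: Never
  volumes:
    - name: cfg
      projected:
        sources:
          - configMap:
              name: my-config          # hypothetical ConfigMap holding key data-1
              items:
                - key: data-1          # source key in the ConfigMap
                  path: mapped/data-1  # remapped file path inside the volume (the "mapping")
                  mode: 0400           # per-item file mode (the "Item mode")
  containers:
    - name: projected-configmap-volume-test
      image: k8s.gcr.io/e2e-test-images/agnhost:2.20
      args: ["mounttest", "--file_content=/etc/cfg/mapped/data-1"]
      volumeMounts:
        - name: cfg
          mountPath: /etc/cfg
EOF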
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":697,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 31 15:01:31.921: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-4jj5 STEP: Creating a pod to test atomic-volume-subpath Jan 31 15:01:31.958: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4jj5" in namespace "subpath-1190" to be "Succeeded or Failed" Jan 31 15:01:31.961: INFO: Pod "pod-subpath-test-configmap-4jj5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.539179ms Jan 31 15:01:33.966: INFO: Pod "pod-subpath-test-configmap-4jj5": Phase="Running", Reason="", readiness=true. Elapsed: 2.00707049s Jan 31 15:01:35.969: INFO: Pod "pod-subpath-test-configmap-4jj5": Phase="Running", Reason="", readiness=true. Elapsed: 4.010483645s Jan 31 15:01:37.974: INFO: Pod "pod-subpath-test-configmap-4jj5": Phase="Running", Reason="", readiness=true. Elapsed: 6.014633285s Jan 31 15:01:39.976: INFO: Pod "pod-subpath-test-configmap-4jj5": Phase="Running", Reason="", readiness=true. Elapsed: 8.017574293s Jan 31 15:01:41.980: INFO: Pod "pod-subpath-test-configmap-4jj5": Phase="Running", Reason="", readiness=true. Elapsed: 10.021607118s Jan 31 15:01:43.985: INFO: Pod "pod-subpath-test-configmap-4jj5": Phase="Running", Reason="", readiness=true. Elapsed: 12.025923349s Jan 31 15:01:45.989: INFO: Pod "pod-subpath-test-configmap-4jj5": Phase="Running", Reason="", readiness=true. Elapsed: 14.030345789s Jan 31 15:01:47.994: INFO: Pod "pod-subpath-test-configmap-4jj5": Phase="Running", Reason="", readiness=true. Elapsed: 16.035416034s Jan 31 15:01:49.998: INFO: Pod "pod-subpath-test-configmap-4jj5": Phase="Running", Reason="", readiness=true. Elapsed: 18.03941437s Jan 31 15:01:52.002: INFO: Pod "pod-subpath-test-configmap-4jj5": Phase="Running", Reason="", readiness=true. Elapsed: 20.043418057s Jan 31 15:01:54.005: INFO: Pod "pod-subpath-test-configmap-4jj5": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 22.046506703s STEP: Saw pod success Jan 31 15:01:54.005: INFO: Pod "pod-subpath-test-configmap-4jj5" satisfied condition "Succeeded or Failed" Jan 31 15:01:54.008: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-subpath-test-configmap-4jj5 container test-container-subpath-configmap-4jj5: <nil> STEP: delete the pod Jan 31 15:01:54.023: INFO: Waiting for pod pod-subpath-test-configmap-4jj5 to disappear Jan 31 15:01:54.026: INFO: Pod pod-subpath-test-configmap-4jj5 no longer exists STEP: Deleting pod pod-subpath-test-configmap-4jj5 Jan 31 15:01:54.026: INFO: Deleting pod "pod-subpath-test-configmap-4jj5" in namespace "subpath-1190" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:01:54.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1190" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":41,"skipped":749,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 31 15:01:54.120: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jan 31 15:01:54.148: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jan 31 15:01:57.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-453 --namespace=crd-publish-openapi-453 create -f -' Jan 31 15:01:57.573: INFO: stderr: "" Jan 31 15:01:57.573: INFO: stdout: "e2e-test-crd-publish-openapi-8413-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 31 15:01:57.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-453 --namespace=crd-publish-openapi-453 delete e2e-test-crd-publish-openapi-8413-crds test-foo' Jan 31 15:01:57.672: INFO: stderr: "" Jan 31 15:01:57.672: INFO: stdout: "e2e-test-crd-publish-openapi-8413-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jan 31 15:01:57.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig
--namespace=crd-publish-openapi-453 --namespace=crd-publish-openapi-453 apply -f -' Jan 31 15:01:57.908: INFO: stderr: "" Jan 31 15:01:57.908: INFO: stdout: "e2e-test-crd-publish-openapi-8413-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 31 15:01:57.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-453 --namespace=crd-publish-openapi-453 delete e2e-test-crd-publish-openapi-8413-crds test-foo' Jan 31 15:01:58.008: INFO: stderr: "" Jan 31 15:01:58.008: INFO: stdout: "e2e-test-crd-publish-openapi-8413-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jan 31 15:01:58.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-453 --namespace=crd-publish-openapi-453 create -f -' Jan 31 15:01:58.223: INFO: rc: 1 Jan 31 15:01:58.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-453 --namespace=crd-publish-openapi-453 apply -f -' Jan 31 15:01:58.430: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jan 31 15:01:58.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-453 --namespace=crd-publish-openapi-453 create -f -' Jan 31 15:01:58.637: INFO: rc: 1 Jan 31 15:01:58.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-453 --namespace=crd-publish-openapi-453 apply -f -' Jan 31 15:01:58.850: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jan 31 15:01:58.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-453 explain e2e-test-crd-publish-openapi-8413-crds' Jan 31 15:01:59.062: INFO: stderr: "" Jan 31 15:01:59.062: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8413-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jan 31 15:01:59.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-453 explain e2e-test-crd-publish-openapi-8413-crds.metadata' Jan 31 15:01:59.312: INFO: stderr: "" Jan 31 15:01:59.312: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8413-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata.
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. 
If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. 
There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 31 15:01:59.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-453 explain e2e-test-crd-publish-openapi-8413-crds.spec' Jan 31 15:01:59.534: INFO: stderr: "" Jan 31 15:01:59.534: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8413-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 31 15:01:59.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-453 explain e2e-test-crd-publish-openapi-8413-crds.spec.bars' Jan 31 15:01:59.755: INFO: stderr: "" Jan 31 15:01:59.755: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8413-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jan 31 15:01:59.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-453 explain e2e-test-crd-publish-openapi-8413-crds.spec.bars2' Jan 31 15:01:59.988: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:02:02.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-453" for this suite.
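The schema walk above is driven entirely by kubectl explain against the CRD's published OpenAPI; with the CRD still installed, the same queries can be issued directly (CRD name from this run), e.g.:

kubectl --kubeconfig=/tmp/kubeconfig explain e2e-test-crd-publish-openapi-8413-crds.spec.bars
# or dump the whole published schema in one pass:
kubectl --kubeconfig=/tmp/kubeconfig explain e2e-test-crd-publish-openapi-8413-crds --recursive

The final rc: 1 is the expected outcome: asking about the nonexistent property spec.bars2 must fail, showing that explain answers only from the schema the CRD actually published.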
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":42,"skipped":808,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:02:02.969: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jan 31 15:02:03.006: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7a8e711-807e-40b0-85b8-c02b03a200b1" in namespace "projected-7175" to be "Succeeded or Failed"
Jan 31 15:02:03.011: INFO: Pod "downwardapi-volume-b7a8e711-807e-40b0-85b8-c02b03a200b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.304887ms
Jan 31 15:02:05.015: INFO: Pod "downwardapi-volume-b7a8e711-807e-40b0-85b8-c02b03a200b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008936541s
STEP: Saw pod success
Jan 31 15:02:05.016: INFO: Pod "downwardapi-volume-b7a8e711-807e-40b0-85b8-c02b03a200b1" satisfied condition "Succeeded or Failed"
Jan 31 15:02:05.021: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod downwardapi-volume-b7a8e711-807e-40b0-85b8-c02b03a200b1 container client-container: <nil>
STEP: delete the pod
Jan 31 15:02:05.041: INFO: Waiting for pod downwardapi-volume-b7a8e711-807e-40b0-85b8-c02b03a200b1 to disappear
Jan 31 15:02:05.044: INFO: Pod downwardapi-volume-b7a8e711-807e-40b0-85b8-c02b03a200b1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:02:05.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7175" for this suite.
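Editor's note: the passing test above mounts a downward API volume that publishes the container's cpu limit; with no limit set, the kubelet falls back to node allocatable. A minimal sketch under assumed names (the pod and file names here are hypothetical, not the test's):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-limit-demo   # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]   # no resources.limits set on purpose
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu
  EOF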
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":843,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:02:05.065: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:02:05.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-2334" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":44,"skipped":851,"failed":0}
SSS
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":11,"skipped":255,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:54:46.190: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299
[It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
Jan 31 14:54:46.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 create -f -'
Jan 31 14:54:46.541: INFO: stderr: ""
Jan 31 14:54:46.541: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 14:54:46.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 14:54:46.651: INFO: stderr: "" Jan 31 14:54:46.651: INFO: stdout: "update-demo-nautilus-d2hzg update-demo-nautilus-k5wpw " Jan 31 14:54:46.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get pods update-demo-nautilus-d2hzg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 14:54:46.738: INFO: stderr: "" Jan 31 14:54:46.738: INFO: stdout: "" Jan 31 14:54:46.738: INFO: update-demo-nautilus-d2hzg is created but not running Jan 31 14:54:51.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 14:54:51.877: INFO: stderr: "" Jan 31 14:54:51.877: INFO: stdout: "update-demo-nautilus-d2hzg update-demo-nautilus-k5wpw " Jan 31 14:54:51.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get pods update-demo-nautilus-d2hzg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 14:54:51.968: INFO: stderr: "" Jan 31 14:54:51.968: INFO: stdout: "true" Jan 31 14:54:51.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get pods update-demo-nautilus-d2hzg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 14:54:52.063: INFO: stderr: "" Jan 31 14:54:52.063: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 14:54:52.063: INFO: validating pod update-demo-nautilus-d2hzg Jan 31 14:54:52.067: INFO: got data: { "image": "nautilus.jpg" } Jan 31 14:54:52.067: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 14:54:52.067: INFO: update-demo-nautilus-d2hzg is verified up and running Jan 31 14:54:52.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get pods update-demo-nautilus-k5wpw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 14:54:52.157: INFO: stderr: "" Jan 31 14:54:52.157: INFO: stdout: "true" Jan 31 14:54:52.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get pods update-demo-nautilus-k5wpw -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 14:54:52.246: INFO: stderr: "" Jan 31 14:54:52.246: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 14:54:52.246: INFO: validating pod update-demo-nautilus-k5wpw Jan 31 14:58:25.447: INFO: update-demo-nautilus-k5wpw is running right image but validator function failed: an error on the server ("unknown") has prevented the request from succeeding (get pods update-demo-nautilus-k5wpw) Jan 31 14:58:30.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 14:58:30.549: INFO: stderr: "" Jan 31 14:58:30.549: INFO: stdout: "update-demo-nautilus-d2hzg update-demo-nautilus-k5wpw " Jan 31 14:58:30.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get pods update-demo-nautilus-d2hzg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 14:58:30.636: INFO: stderr: "" Jan 31 14:58:30.636: INFO: stdout: "true" Jan 31 14:58:30.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get pods update-demo-nautilus-d2hzg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 14:58:30.737: INFO: stderr: "" Jan 31 14:58:30.737: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 14:58:30.737: INFO: validating pod update-demo-nautilus-d2hzg Jan 31 14:58:30.742: INFO: got data: { "image": "nautilus.jpg" } Jan 31 14:58:30.742: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 14:58:30.742: INFO: update-demo-nautilus-d2hzg is verified up and running Jan 31 14:58:30.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get pods update-demo-nautilus-k5wpw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 14:58:30.837: INFO: stderr: "" Jan 31 14:58:30.837: INFO: stdout: "true" Jan 31 14:58:30.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get pods update-demo-nautilus-k5wpw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 14:58:30.926: INFO: stderr: "" Jan 31 14:58:30.926: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 14:58:30.926: INFO: validating pod update-demo-nautilus-k5wpw Jan 31 15:02:04.588: INFO: update-demo-nautilus-k5wpw is running right image but validator function failed: an error on the server ("unknown") has prevented the request from succeeding (get pods update-demo-nautilus-k5wpw) Jan 31 15:02:09.588: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.validateController(0x5416760, 0xc001845080, 0xc0001359e0, 0x2e, 0x2, 0x4c07034, 0xb, 0x4c1b958, 0x10, 0xc006df0d80, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2205 +0xd56 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 +0x2ad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002db9e00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc002db9e00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc002db9e00, 0x4df04f8) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 �[1mSTEP�[0m: using delete to clean up resources Jan 31 15:02:09.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 delete --grace-period=0 --force -f -' Jan 31 15:02:09.691: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 31 15:02:09.691: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 31 15:02:09.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get rc,svc -l name=update-demo --no-headers' Jan 31 15:02:09.828: INFO: stderr: "No resources found in kubectl-9189 namespace.\n" Jan 31 15:02:09.828: INFO: stdout: "" Jan 31 15:02:09.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 31 15:02:09.948: INFO: stderr: "" Jan 31 15:02:09.948: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 31 15:02:09.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-9189" for this suite. 
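Editor's note: the 300-second timeout above comes from the test's polling loop, whose kubectl invocations are logged verbatim; while the namespace still exists they can be replayed by hand to isolate the apiserver's ("unknown") error. These are the exact commands from the log (the go-template helpers such as exists are accepted by this kubectl, as the log itself shows):

  kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get pods -l name=update-demo \
    -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
  kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9189 get pods update-demo-nautilus-k5wpw \
    -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'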
• Failure [443.769 seconds]
[sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297
should scale a replication controller [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 15:02:09.589: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2205
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:02:05.151: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jan 31 15:02:05.179: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jan 31 15:02:16.811: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 31 15:02:19.772: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:02:32.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5547" for this suite.
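Editor's note: a minimal sketch of the kind of multi-version CRD the OpenAPI-publishing test above exercises (group and names here are hypothetical); two served versions of one group surface together in the cluster's OpenAPI document:

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: foos.multiversion.example.com   # hypothetical group and name
  spec:
    group: multiversion.example.com
    scope: Namespaced
    names:
      plural: foos
      singular: foo
      kind: Foo
    versions:
    - name: v1
      served: true
      storage: true      # exactly one version is the storage version
      schema:
        openAPIV3Schema:
          type: object
    - name: v2
      served: true
      storage: false
      schema:
        openAPIV3Schema:
          type: object
  EOF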
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":45,"skipped":854,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:02:32.100: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:02:32.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1033" for this suite.
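Editor's note: a rough, hand-runnable replay of the Endpoint lifecycle steps above (name, label, and addresses are hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Endpoints
  metadata:
    name: example-endpoint            # hypothetical
    labels:
      test-endpoint-static: "true"
  subsets:
  - addresses:
    - ip: 10.0.0.24
    ports:
    - name: http
      port: 80
  EOF
  # update in place, then delete the whole collection by label selector, as the test does
  kubectl patch endpoints example-endpoint --type=merge \
    -p '{"subsets":[{"addresses":[{"ip":"10.0.0.25"}],"ports":[{"name":"http","port":8080}]}]}'
  kubectl delete endpoints -l test-endpoint-static=true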
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":46,"skipped":867,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:02:32.189: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39
STEP: Setting up server cert
[It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 15:02:32.759: INFO: Checking APIGroup: apiregistration.k8s.io Jan 31 15:02:32.760: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Jan 31 15:02:32.760: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Jan 31 15:02:32.760: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1
Jan 31 15:02:32.760: INFO: Checking APIGroup: extensions Jan 31 15:02:32.761: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Jan 31 15:02:32.761: INFO: Versions found [{extensions/v1beta1 v1beta1}] Jan 31 15:02:32.761: INFO: extensions/v1beta1 matches extensions/v1beta1
Jan 31 15:02:32.761: INFO: Checking APIGroup: apps Jan 31 15:02:32.763: INFO: PreferredVersion.GroupVersion: apps/v1 Jan 31 15:02:32.763: INFO: Versions found [{apps/v1 v1}] Jan 31 15:02:32.763: INFO: apps/v1 matches apps/v1
Jan 31 15:02:32.763: INFO: Checking APIGroup: events.k8s.io Jan 31 15:02:32.764: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Jan 31 15:02:32.764: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Jan 31 15:02:32.764: INFO: events.k8s.io/v1 matches events.k8s.io/v1
Jan 31 15:02:32.764: INFO: Checking APIGroup: authentication.k8s.io Jan 31 15:02:32.765: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Jan 31 15:02:32.765: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Jan 31 15:02:32.765: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1
Jan 31 15:02:32.765: INFO: Checking APIGroup: authorization.k8s.io Jan 31 15:02:32.767: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Jan 31 15:02:32.767: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Jan 31 15:02:32.767: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1
Jan 31 15:02:32.767: INFO: Checking APIGroup: autoscaling Jan 31 15:02:32.768: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Jan 31 15:02:32.768: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Jan 31 15:02:32.768: INFO: autoscaling/v1 matches autoscaling/v1
Jan 31 15:02:32.768: INFO: Checking APIGroup: batch Jan 31 15:02:32.769: INFO: PreferredVersion.GroupVersion: batch/v1 Jan 31 15:02:32.769: INFO: Versions found
[{batch/v1 v1} {batch/v1beta1 v1beta1}] Jan 31 15:02:32.769: INFO: batch/v1 matches batch/v1 Jan 31 15:02:32.769: INFO: Checking APIGroup: certificates.k8s.io Jan 31 15:02:32.770: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Jan 31 15:02:32.770: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Jan 31 15:02:32.770: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Jan 31 15:02:32.770: INFO: Checking APIGroup: networking.k8s.io Jan 31 15:02:32.771: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Jan 31 15:02:32.771: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Jan 31 15:02:32.771: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Jan 31 15:02:32.771: INFO: Checking APIGroup: policy Jan 31 15:02:32.772: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Jan 31 15:02:32.772: INFO: Versions found [{policy/v1beta1 v1beta1}] Jan 31 15:02:32.772: INFO: policy/v1beta1 matches policy/v1beta1 Jan 31 15:02:32.772: INFO: Checking APIGroup: rbac.authorization.k8s.io Jan 31 15:02:32.773: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Jan 31 15:02:32.773: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Jan 31 15:02:32.773: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Jan 31 15:02:32.773: INFO: Checking APIGroup: storage.k8s.io Jan 31 15:02:32.774: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Jan 31 15:02:32.774: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Jan 31 15:02:32.774: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Jan 31 15:02:32.774: INFO: Checking APIGroup: admissionregistration.k8s.io Jan 31 15:02:32.775: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Jan 31 15:02:32.775: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Jan 31 15:02:32.775: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Jan 31 15:02:32.775: INFO: Checking APIGroup: apiextensions.k8s.io Jan 31 15:02:32.776: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Jan 31 15:02:32.776: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Jan 31 15:02:32.776: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Jan 31 15:02:32.776: INFO: Checking APIGroup: scheduling.k8s.io Jan 31 15:02:32.778: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Jan 31 15:02:32.778: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Jan 31 15:02:32.778: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Jan 31 15:02:32.778: INFO: Checking APIGroup: coordination.k8s.io Jan 31 15:02:32.779: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Jan 31 15:02:32.779: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Jan 31 15:02:32.779: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Jan 31 15:02:32.779: INFO: Checking APIGroup: node.k8s.io Jan 31 15:02:32.780: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1 Jan 31 15:02:32.780: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] Jan 31 15:02:32.780: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 Jan 31 15:02:32.780: INFO: Checking APIGroup: discovery.k8s.io Jan 31 15:02:32.781: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Jan 31 15:02:32.781: 
INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Jan 31 15:02:32.781: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1
[AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:02:32.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-9180" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":47,"skipped":871,"failed":0}
SSSSSSSSSSS
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":11,"skipped":255,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:02:09.961: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299
[It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
Jan 31 15:02:10.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 create -f -'
Jan 31 15:02:10.612: INFO: stderr: ""
Jan 31 15:02:10.612: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 15:02:10.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 31 15:02:10.731: INFO: stderr: ""
Jan 31 15:02:10.732: INFO: stdout: "update-demo-nautilus-6hzpn update-demo-nautilus-g6pbz "
Jan 31 15:02:10.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods update-demo-nautilus-6hzpn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 15:02:10.832: INFO: stderr: "" Jan 31 15:02:10.832: INFO: stdout: "" Jan 31 15:02:10.832: INFO: update-demo-nautilus-6hzpn is created but not running Jan 31 15:02:15.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 15:02:15.937: INFO: stderr: "" Jan 31 15:02:15.937: INFO: stdout: "update-demo-nautilus-6hzpn update-demo-nautilus-g6pbz " Jan 31 15:02:15.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods update-demo-nautilus-6hzpn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 15:02:16.047: INFO: stderr: "" Jan 31 15:02:16.047: INFO: stdout: "true" Jan 31 15:02:16.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods update-demo-nautilus-6hzpn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 15:02:16.149: INFO: stderr: "" Jan 31 15:02:16.149: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 15:02:16.149: INFO: validating pod update-demo-nautilus-6hzpn Jan 31 15:02:16.154: INFO: got data: { "image": "nautilus.jpg" } Jan 31 15:02:16.154: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 15:02:16.154: INFO: update-demo-nautilus-6hzpn is verified up and running Jan 31 15:02:16.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods update-demo-nautilus-g6pbz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 15:02:16.254: INFO: stderr: "" Jan 31 15:02:16.254: INFO: stdout: "true" Jan 31 15:02:16.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods update-demo-nautilus-g6pbz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 15:02:16.348: INFO: stderr: "" Jan 31 15:02:16.348: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 15:02:16.348: INFO: validating pod update-demo-nautilus-g6pbz Jan 31 15:02:16.353: INFO: got data: { "image": "nautilus.jpg" } Jan 31 15:02:16.353: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 15:02:16.353: INFO: update-demo-nautilus-g6pbz is verified up and running �[1mSTEP�[0m: scaling down the replication controller Jan 31 15:02:16.356: INFO: scanned /root for discovery docs: <nil> Jan 31 15:02:16.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Jan 31 15:02:17.480: INFO: stderr: "" Jan 31 15:02:17.481: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" �[1mSTEP�[0m: waiting for all containers in name=update-demo pods to come up. 
Jan 31 15:02:17.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 15:02:17.578: INFO: stderr: "" Jan 31 15:02:17.578: INFO: stdout: "update-demo-nautilus-6hzpn update-demo-nautilus-g6pbz " �[1mSTEP�[0m: Replicas for name=update-demo: expected=1 actual=2 Jan 31 15:02:22.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 15:02:22.727: INFO: stderr: "" Jan 31 15:02:22.727: INFO: stdout: "update-demo-nautilus-6hzpn update-demo-nautilus-g6pbz " �[1mSTEP�[0m: Replicas for name=update-demo: expected=1 actual=2 Jan 31 15:02:27.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 15:02:27.841: INFO: stderr: "" Jan 31 15:02:27.841: INFO: stdout: "update-demo-nautilus-6hzpn update-demo-nautilus-g6pbz " �[1mSTEP�[0m: Replicas for name=update-demo: expected=1 actual=2 Jan 31 15:02:32.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 15:02:32.940: INFO: stderr: "" Jan 31 15:02:32.940: INFO: stdout: "update-demo-nautilus-g6pbz " Jan 31 15:02:32.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods update-demo-nautilus-g6pbz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 15:02:33.034: INFO: stderr: "" Jan 31 15:02:33.034: INFO: stdout: "true" Jan 31 15:02:33.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods update-demo-nautilus-g6pbz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 15:02:33.134: INFO: stderr: "" Jan 31 15:02:33.134: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 15:02:33.134: INFO: validating pod update-demo-nautilus-g6pbz Jan 31 15:02:33.137: INFO: got data: { "image": "nautilus.jpg" } Jan 31 15:02:33.137: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 15:02:33.138: INFO: update-demo-nautilus-g6pbz is verified up and running �[1mSTEP�[0m: scaling up the replication controller Jan 31 15:02:33.139: INFO: scanned /root for discovery docs: <nil> Jan 31 15:02:33.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Jan 31 15:02:34.262: INFO: stderr: "" Jan 31 15:02:34.262: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" �[1mSTEP�[0m: waiting for all containers in name=update-demo pods to come up. 
Jan 31 15:02:34.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 15:02:34.362: INFO: stderr: "" Jan 31 15:02:34.362: INFO: stdout: "update-demo-nautilus-g6pbz update-demo-nautilus-mpxh9 " Jan 31 15:02:34.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods update-demo-nautilus-g6pbz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 15:02:34.468: INFO: stderr: "" Jan 31 15:02:34.468: INFO: stdout: "true" Jan 31 15:02:34.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods update-demo-nautilus-g6pbz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 15:02:34.566: INFO: stderr: "" Jan 31 15:02:34.566: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 15:02:34.566: INFO: validating pod update-demo-nautilus-g6pbz Jan 31 15:02:34.570: INFO: got data: { "image": "nautilus.jpg" } Jan 31 15:02:34.570: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 15:02:34.570: INFO: update-demo-nautilus-g6pbz is verified up and running Jan 31 15:02:34.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods update-demo-nautilus-mpxh9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 15:02:34.672: INFO: stderr: "" Jan 31 15:02:34.672: INFO: stdout: "true" Jan 31 15:02:34.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods update-demo-nautilus-mpxh9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 15:02:34.785: INFO: stderr: "" Jan 31 15:02:34.785: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 15:02:34.785: INFO: validating pod update-demo-nautilus-mpxh9 Jan 31 15:02:34.791: INFO: got data: { "image": "nautilus.jpg" } Jan 31 15:02:34.791: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 15:02:34.791: INFO: update-demo-nautilus-mpxh9 is verified up and running �[1mSTEP�[0m: using delete to clean up resources Jan 31 15:02:34.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 delete --grace-period=0 --force -f -' Jan 31 15:02:34.899: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n"
Jan 31 15:02:34.899: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 31 15:02:34.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get rc,svc -l name=update-demo --no-headers'
Jan 31 15:02:35.026: INFO: stderr: "No resources found in kubectl-4084 namespace.\n"
Jan 31 15:02:35.027: INFO: stdout: ""
Jan 31 15:02:35.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 15:02:35.135: INFO: stderr: ""
Jan 31 15:02:35.135: INFO: stdout: "update-demo-nautilus-g6pbz\nupdate-demo-nautilus-mpxh9\n"
Jan 31 15:02:35.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get rc,svc -l name=update-demo --no-headers'
Jan 31 15:02:35.752: INFO: stderr: "No resources found in kubectl-4084 namespace.\n"
Jan 31 15:02:35.752: INFO: stdout: ""
Jan 31 15:02:35.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4084 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 15:02:35.871: INFO: stderr: ""
Jan 31 15:02:35.871: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:02:35.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4084" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":12,"skipped":255,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:02:35.898: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create deployment with httpd image
Jan 31 15:02:35.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2575 create -f -'
Jan 31 15:02:36.306: INFO: stderr: ""
Jan 31 15:02:36.306: INFO: stdout: "deployment.apps/httpd-deployment created\n"
STEP: verify diff finds difference between live and declared image
Jan 31 15:02:36.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2575 diff -f -'
Jan 31 15:02:36.739: INFO: rc: 1
Jan 31 15:02:36.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2575 delete -f -'
Jan 31 15:02:36.848: INFO: stderr: ""
Jan 31 15:02:36.848: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:02:36.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2575" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":13,"skipped":266,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:02:36.884: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 31 15:02:40.960: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:02:40.963: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:02:42.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:02:42.967: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:02:44.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:02:44.967: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:02:44.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4866" for this suite.
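Editor's note: the "rc: 1" from kubectl diff earlier in this block is the expected outcome, not an error. kubectl diff exits 0 when live and declared state match, 1 when a difference is found, and greater than 1 on failure. A sketch (the manifest file name is hypothetical):

  kubectl diff -f httpd-deployment.yaml; rc=$?
  # rc 0: no drift; rc 1: drift found (what the test asserts); rc >1: kubectl error
  [ "$rc" -eq 1 ] && echo "live object differs from declared manifest"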
•
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":282,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
SSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:01:15.144: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-6692
[It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a new StatefulSet
Jan 31 15:01:15.193: INFO: Found 0 stateful pods, waiting for 3
Jan 31 15:01:25.198: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 15:01:25.198: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 15:01:25.198: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 31 15:01:25.227: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 31 15:01:35.262: INFO: Updating stateful set ss2
Jan 31 15:01:35.272: INFO: Waiting for Pod statefulset-6692/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jan 31 15:01:45.318: INFO: Found 2 stateful pods, waiting for 3
Jan 31 15:01:55.323: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 15:01:55.323: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 15:01:55.323: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 31 15:01:55.348: INFO: Updating stateful set ss2
Jan 31 15:01:55.359: INFO: Waiting for Pod statefulset-6692/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 15:02:05.385: INFO: Updating stateful set ss2
Jan 31 15:02:05.394: INFO: Waiting for StatefulSet statefulset-6692/ss2 to complete update
Jan 31
15:02:05.394: INFO: Waiting for Pod statefulset-6692/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 31 15:02:15.401: INFO: Waiting for StatefulSet statefulset-6692/ss2 to complete update
Jan 31 15:02:15.401: INFO: Waiting for Pod statefulset-6692/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Jan 31 15:02:25.402: INFO: Deleting all statefulset in ns statefulset-6692
Jan 31 15:02:25.404: INFO: Scaling statefulset ss2 to 0
Jan 31 15:02:55.419: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 15:02:55.422: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:02:55.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6692" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":39,"skipped":767,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
SSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:02:55.450: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:01.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5120" for this suite.
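Editor's note: the canary and phased rolling updates in the StatefulSet test above hinge on the RollingUpdate partition: only pods with an ordinal >= partition move to the new revision. A minimal sketch against the test's ss2 (the container name here is hypothetical):

  # canary: with 3 replicas, partition=2 rolls only ss2-2
  kubectl -n statefulset-6692 patch statefulset ss2 \
    -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
  kubectl -n statefulset-6692 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
  # phased rollout: lower the partition step by step to update the remaining ordinals
  kubectl -n statefulset-6692 patch statefulset ss2 \
    -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'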
•
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":40,"skipped":774,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
SSS
------------------------------
[BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:01.503: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 15:03:01.536: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-234a8ac6-1b97-4115-925d-499af717e79c" in namespace "security-context-test-5571" to be "Succeeded or Failed"
Jan 31 15:03:01.538: INFO: Pod "busybox-readonly-false-234a8ac6-1b97-4115-925d-499af717e79c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.963592ms
Jan 31 15:03:03.542: INFO: Pod "busybox-readonly-false-234a8ac6-1b97-4115-925d-499af717e79c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005898192s
Jan 31 15:03:03.542: INFO: Pod "busybox-readonly-false-234a8ac6-1b97-4115-925d-499af717e79c" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:03.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5571" for this suite.
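Editor's note: a minimal sketch of the securityContext the passing test above exercises; with readOnlyRootFilesystem: false the container's root filesystem accepts writes (the pod name is hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-readonly-false-demo   # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo ok > /tmp/probe && cat /tmp/probe"]
      securityContext:
        readOnlyRootFilesystem: false
  EOF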
•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":777,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:03.605: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 31 15:03:03.637: INFO: Waiting up to 5m0s for pod "pod-7f2fd6d3-0cdd-40d1-8641-ed3c2eadbf8c" in namespace "emptydir-7797" to be "Succeeded or Failed"
Jan 31 15:03:03.644: INFO: Pod "pod-7f2fd6d3-0cdd-40d1-8641-ed3c2eadbf8c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.640037ms
Jan 31 15:03:05.647: INFO: Pod "pod-7f2fd6d3-0cdd-40d1-8641-ed3c2eadbf8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010265041s
STEP: Saw pod success
Jan 31 15:03:05.648: INFO: Pod "pod-7f2fd6d3-0cdd-40d1-8641-ed3c2eadbf8c" satisfied condition "Succeeded or Failed"
Jan 31 15:03:05.650: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-7f2fd6d3-0cdd-40d1-8641-ed3c2eadbf8c container test-container: <nil>
STEP: delete the pod
Jan 31 15:03:05.673: INFO: Waiting for pod pod-7f2fd6d3-0cdd-40d1-8641-ed3c2eadbf8c to disappear
Jan 31 15:03:05.676: INFO: Pod pod-7f2fd6d3-0cdd-40d1-8641-ed3c2eadbf8c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:05.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7797" for this suite.
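Editor's note: a sketch of the emptyDir default-medium check above; the conformance test asserts the mount is created world-accessible (mode 777) on the node's default storage medium (the pod name is hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo   # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "stat -c '%a' /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}   # default medium: node-local disk
  EOF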
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":814,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:05.706: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:05.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9222" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":43,"skipped":828,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
SSSSS
------------------------------
[BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:05.772: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jan 31 15:03:05.804: INFO: Waiting up to 5m0s for pod "downward-api-8d3de60a-1119-464f-a699-cb931ad28e6d" in namespace "downward-api-7375" to be "Succeeded or Failed"
Jan 31 15:03:05.808: INFO: Pod "downward-api-8d3de60a-1119-464f-a699-cb931ad28e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11182ms
Jan 31 15:03:07.812: INFO: Pod "downward-api-8d3de60a-1119-464f-a699-cb931ad28e6d": Phase="Succeeded", Reason="", readiness=false.
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:05.772: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jan 31 15:03:05.804: INFO: Waiting up to 5m0s for pod "downward-api-8d3de60a-1119-464f-a699-cb931ad28e6d" in namespace "downward-api-7375" to be "Succeeded or Failed"
Jan 31 15:03:05.808: INFO: Pod "downward-api-8d3de60a-1119-464f-a699-cb931ad28e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11182ms
Jan 31 15:03:07.812: INFO: Pod "downward-api-8d3de60a-1119-464f-a699-cb931ad28e6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008331626s
STEP: Saw pod success
Jan 31 15:03:07.813: INFO: Pod "downward-api-8d3de60a-1119-464f-a699-cb931ad28e6d" satisfied condition "Succeeded or Failed"
Jan 31 15:03:07.815: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-6xi31i pod downward-api-8d3de60a-1119-464f-a699-cb931ad28e6d container dapi-container: <nil>
STEP: delete the pod
Jan 31 15:03:07.837: INFO: Waiting for pod downward-api-8d3de60a-1119-464f-a699-cb931ad28e6d to disappear
Jan 31 15:03:07.839: INFO: Pod downward-api-8d3de60a-1119-464f-a699-cb931ad28e6d no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:07.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7375" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":833,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:07.859: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 15:03:08.271: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 15:03:11.298: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 15:03:11.302: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7526-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:12.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9145" for this suite.
STEP: Destroying namespace "webhook-9145-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":45,"skipped":840,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:12.577: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 31 15:03:12.626: INFO: Waiting up to 5m0s for pod "pod-1a74fda5-3ba9-499c-a075-f7b7c170f807" in namespace "emptydir-2367" to be "Succeeded or Failed"
Jan 31 15:03:12.630: INFO: Pod "pod-1a74fda5-3ba9-499c-a075-f7b7c170f807": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14536ms
Jan 31 15:03:14.634: INFO: Pod "pod-1a74fda5-3ba9-499c-a075-f7b7c170f807": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008257837s
STEP: Saw pod success
Jan 31 15:03:14.634: INFO: Pod "pod-1a74fda5-3ba9-499c-a075-f7b7c170f807" satisfied condition "Succeeded or Failed"
Jan 31 15:03:14.637: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-1a74fda5-3ba9-499c-a075-f7b7c170f807 container test-container: <nil>
STEP: delete the pod
Jan 31 15:03:14.651: INFO: Waiting for pod pod-1a74fda5-3ba9-499c-a075-f7b7c170f807 to disappear
Jan 31 15:03:14.654: INFO: Pod pod-1a74fda5-3ba9-499c-a075-f7b7c170f807 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:14.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2367" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":860,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:14.707: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jan 31 15:03:14.743: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa94d31e-07b2-485a-832f-314713a4283c" in namespace "downward-api-4341" to be "Succeeded or Failed"
Jan 31 15:03:14.745: INFO: Pod "downwardapi-volume-fa94d31e-07b2-485a-832f-314713a4283c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081092ms
Jan 31 15:03:16.749: INFO: Pod "downwardapi-volume-fa94d31e-07b2-485a-832f-314713a4283c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006111568s
STEP: Saw pod success
Jan 31 15:03:16.749: INFO: Pod "downwardapi-volume-fa94d31e-07b2-485a-832f-314713a4283c" satisfied condition "Succeeded or Failed"
Jan 31 15:03:16.752: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-md-0-9ckrk-5f5b498969-8lt9h pod downwardapi-volume-fa94d31e-07b2-485a-832f-314713a4283c container client-container: <nil>
STEP: delete the pod
Jan 31 15:03:16.767: INFO: Waiting for pod downwardapi-volume-fa94d31e-07b2-485a-832f-314713a4283c to disappear
Jan 31 15:03:16.770: INFO: Pod downwardapi-volume-fa94d31e-07b2-485a-832f-314713a4283c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:16.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4341" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":890,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:16.797: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 15:03:16.827: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 31 15:03:17.855: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:18.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5634" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":48,"skipped":904,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
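The failure condition the spec above checks for is the ReplicaFailure condition that the controller sets on the RC status when quota blocks pod creation. A minimal sketch of the same shape, assuming an existing clientset; the image and a single unpolled status read are simplifications of what the real test does:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func int32Ptr(i int32) *int32 { return &i }

// quotaSurfacesFailure creates a 2-pod quota, an RC asking for 3 replicas,
// and reads back whether a ReplicaFailure condition was surfaced.
func quotaSurfacesFailure(ctx context.Context, cs *kubernetes.Clientset, ns string) (bool, error) {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{}); err != nil {
		return false, err
	}
	labels := map[string]string{"name": "condition-test"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(3), // asks for more than the quota allows
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "pause",
					Image: "k8s.gcr.io/pause:3.2", // illustrative image
				}}},
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		return false, err
	}
	// The real test polls; one read is enough to show where the condition lives.
	got, err := cs.CoreV1().ReplicationControllers(ns).Get(ctx, rc.Name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range got.Status.Conditions {
		if c.Type == corev1.ReplicationControllerReplicaFailure {
			return true, nil
		}
	}
	return false, nil
}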
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:18.908: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 31 15:03:18.946: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7297 /api/v1/namespaces/watch-7297/configmaps/e2e-watch-test-watch-closed 91f92902-05e9-4701-878f-8f65cf8f9388 11197 0 2023-01-31 15:03:18 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-31 15:03:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 15:03:18.946: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7297 /api/v1/namespaces/watch-7297/configmaps/e2e-watch-test-watch-closed 91f92902-05e9-4701-878f-8f65cf8f9388 11198 0 2023-01-31 15:03:18 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-31 15:03:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 31 15:03:18.958: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7297 /api/v1/namespaces/watch-7297/configmaps/e2e-watch-test-watch-closed 91f92902-05e9-4701-878f-8f65cf8f9388 11199 0 2023-01-31 15:03:18 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-31 15:03:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 15:03:18.958: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7297 /api/v1/namespaces/watch-7297/configmaps/e2e-watch-test-watch-closed 91f92902-05e9-4701-878f-8f65cf8f9388 11200 0 2023-01-31 15:03:18 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-31 15:03:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:18.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7297" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":49,"skipped":937,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
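The resume trick this spec exercises is passing the last observed resourceVersion when opening the new watch, which is why the MODIFIED (11199) and DELETED (11200) events that happened while the first watch was closed still arrive in order. A minimal client-go sketch, assuming a clientset and the resourceVersion captured from the first watch:

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// resumeWatch re-establishes a ConfigMap watch from the last resourceVersion
// observed before the previous watch was closed, so no events are missed.
func resumeWatch(ctx context.Context, cs *kubernetes.Clientset, ns, lastRV string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
		ResourceVersion: lastRV, // resume point from the first watch
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// Events missed while the first watch was closed are replayed here.
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
	return nil
}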
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:18.975: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-8d151de1-0011-4b38-b2ac-47a65edefccc
STEP: Creating a pod to test consume configMaps
Jan 31 15:03:19.012: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e6cda619-393f-40af-bffc-5b50fb8fb8ff" in namespace "projected-4204" to be "Succeeded or Failed"
Jan 31 15:03:19.015: INFO: Pod "pod-projected-configmaps-e6cda619-393f-40af-bffc-5b50fb8fb8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.558224ms
Jan 31 15:03:21.018: INFO: Pod "pod-projected-configmaps-e6cda619-393f-40af-bffc-5b50fb8fb8ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005732909s
STEP: Saw pod success
Jan 31 15:03:21.018: INFO: Pod "pod-projected-configmaps-e6cda619-393f-40af-bffc-5b50fb8fb8ff" satisfied condition "Succeeded or Failed"
Jan 31 15:03:21.021: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-projected-configmaps-e6cda619-393f-40af-bffc-5b50fb8fb8ff container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jan 31 15:03:21.038: INFO: Waiting for pod pod-projected-configmaps-e6cda619-393f-40af-bffc-5b50fb8fb8ff to disappear
Jan 31 15:03:21.040: INFO: Pod pod-projected-configmaps-e6cda619-393f-40af-bffc-5b50fb8fb8ff no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:21.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4204" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":943,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:02:32.806: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:32.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1390" for this suite.
•
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:21.059: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-lshv
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 15:03:21.095: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lshv" in namespace "subpath-9960" to be "Succeeded or Failed"
Jan 31 15:03:21.098: INFO: Pod "pod-subpath-test-projected-lshv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087875ms
Jan 31 15:03:23.102: INFO: Pod "pod-subpath-test-projected-lshv": Phase="Running", Reason="", readiness=true. Elapsed: 2.006251323s
Jan 31 15:03:25.106: INFO: Pod "pod-subpath-test-projected-lshv": Phase="Running", Reason="", readiness=true. Elapsed: 4.010002377s
Jan 31 15:03:27.109: INFO: Pod "pod-subpath-test-projected-lshv": Phase="Running", Reason="", readiness=true. Elapsed: 6.013098617s
Jan 31 15:03:29.113: INFO: Pod "pod-subpath-test-projected-lshv": Phase="Running", Reason="", readiness=true. Elapsed: 8.016787049s
Jan 31 15:03:31.117: INFO: Pod "pod-subpath-test-projected-lshv": Phase="Running", Reason="", readiness=true. Elapsed: 10.021236899s
Jan 31 15:03:33.121: INFO: Pod "pod-subpath-test-projected-lshv": Phase="Running", Reason="", readiness=true. Elapsed: 12.024874471s
Jan 31 15:03:35.128: INFO: Pod "pod-subpath-test-projected-lshv": Phase="Running", Reason="", readiness=true. Elapsed: 14.031926538s
Jan 31 15:03:37.132: INFO: Pod "pod-subpath-test-projected-lshv": Phase="Running", Reason="", readiness=true. Elapsed: 16.036012841s
Jan 31 15:03:39.137: INFO: Pod "pod-subpath-test-projected-lshv": Phase="Running", Reason="", readiness=true. Elapsed: 18.040554659s
Jan 31 15:03:41.141: INFO: Pod "pod-subpath-test-projected-lshv": Phase="Running", Reason="", readiness=true. Elapsed: 20.044567309s
Jan 31 15:03:43.144: INFO: Pod "pod-subpath-test-projected-lshv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.047659265s
STEP: Saw pod success
Jan 31 15:03:43.144: INFO: Pod "pod-subpath-test-projected-lshv" satisfied condition "Succeeded or Failed"
Jan 31 15:03:43.147: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-subpath-test-projected-lshv container test-container-subpath-projected-lshv: <nil>
STEP: delete the pod
Jan 31 15:03:43.162: INFO: Waiting for pod pod-subpath-test-projected-lshv to disappear
Jan 31 15:03:43.164: INFO: Pod pod-subpath-test-projected-lshv no longer exists
STEP: Deleting pod pod-subpath-test-projected-lshv
Jan 31 15:03:43.164: INFO: Deleting pod "pod-subpath-test-projected-lshv" in namespace "subpath-9960"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:43.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9960" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":51,"skipped":951,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:43.203: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jan 31 15:03:43.233: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14151629-045b-4c8e-8c0a-a95816ef2ca4" in namespace "downward-api-3360" to be "Succeeded or Failed"
Jan 31 15:03:43.235: INFO: Pod "downwardapi-volume-14151629-045b-4c8e-8c0a-a95816ef2ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240863ms
Jan 31 15:03:45.239: INFO: Pod "downwardapi-volume-14151629-045b-4c8e-8c0a-a95816ef2ca4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005911491s
STEP: Saw pod success
Jan 31 15:03:45.239: INFO: Pod "downwardapi-volume-14151629-045b-4c8e-8c0a-a95816ef2ca4" satisfied condition "Succeeded or Failed"
Jan 31 15:03:45.242: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod downwardapi-volume-14151629-045b-4c8e-8c0a-a95816ef2ca4 container client-container: <nil>
STEP: delete the pod
Jan 31 15:03:45.258: INFO: Waiting for pod downwardapi-volume-14151629-045b-4c8e-8c0a-a95816ef2ca4 to disappear
Jan 31 15:03:45.261: INFO: Pod downwardapi-volume-14151629-045b-4c8e-8c0a-a95816ef2ca4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:45.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3360" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":966,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:45.270: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:47.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-473" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":966,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":882,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:32.854: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-3274
STEP: creating service affinity-clusterip in namespace services-3274
STEP: creating replication controller affinity-clusterip in namespace services-3274
I0131 15:03:32.897806      15 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-3274, replica count: 3
I0131 15:03:35.948341      15 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 31 15:03:35.953: INFO: Creating new exec pod
Jan 31 15:03:38.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3274 exec execpod-affinitys55l8 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Jan 31 15:03:39.136: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n"
Jan 31 15:03:39.136: INFO: stdout: ""
Jan 31 15:03:39.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3274 exec execpod-affinitys55l8 -- /bin/sh -x -c nc -zv -t -w 2 10.133.34.119 80'
Jan 31 15:03:39.320: INFO: stderr: "+ nc -zv -t -w 2 10.133.34.119 80\nConnection to 10.133.34.119 80 port [tcp/http] succeeded!\n"
Jan 31 15:03:39.320: INFO: stdout: ""
Jan 31 15:03:39.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3274 exec execpod-affinitys55l8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.133.34.119:80/ ; done'
Jan 31 15:03:39.607: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.34.119:80/\n"
Jan 31 15:03:39.608: INFO: stdout: "\naffinity-clusterip-vvrbv\naffinity-clusterip-vvrbv\naffinity-clusterip-vvrbv\naffinity-clusterip-vvrbv\naffinity-clusterip-vvrbv\naffinity-clusterip-vvrbv\naffinity-clusterip-vvrbv\naffinity-clusterip-vvrbv\naffinity-clusterip-vvrbv\naffinity-clusterip-vvrbv\naffinity-clusterip-vvrbv\naffinity-clusterip-vvrbv\naffinity-clusterip-vvrbv\naffinity-clusterip-vvrbv\naffinity-clusterip-vvrbv\naffinity-clusterip-vvrbv"
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Received response from host: affinity-clusterip-vvrbv
Jan 31 15:03:39.608: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-3274, will wait for the garbage collector to delete the pods
Jan 31 15:03:39.680: INFO: Deleting ReplicationController affinity-clusterip took: 5.580878ms
Jan 31 15:03:40.081: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.282054ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:54.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3274" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":49,"skipped":882,"failed":0}
------------------------------
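Every curl above returned the same backend pod (affinity-clusterip-vvrbv) because the Service under test pins a client to one endpoint via ClientIP session affinity. A sketch of that Service shape in Go types; the selector and target port are illustrative, not necessarily what the e2e template uses:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// affinityService: a ClusterIP Service with ClientIP session affinity, so
// repeated requests from one client land on the same backend pod.
var affinityService = &corev1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip"},
	Spec: corev1.ServiceSpec{
		Type:            corev1.ServiceTypeClusterIP,
		SessionAffinity: corev1.ServiceAffinityClientIP,
		Selector:        map[string]string{"name": "affinity-clusterip"},
		Ports: []corev1.ServicePort{{
			Port:       80,
			TargetPort: intstr.FromInt(9376), // illustrative backend port
		}},
	},
}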
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:54.767: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 31 15:03:56.830: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:03:56.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8573" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":902,"failed":0}
------------------------------
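The setting this spec exercises is the container's TerminationMessagePolicy: with FallbackToLogsOnError, a failed container that never writes /dev/termination-log gets its termination message from the tail of its log, which is how the "DONE" above was matched. A hedged sketch of such a pod; the name, image, and command are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fallbackPod: the container fails after logging "DONE", and the kubelet
// copies the log tail into the termination message because of the policy.
var fallbackPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"}, // illustrative
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:                     "main",
			Image:                    "busybox:1.29", // illustrative image
			Command:                  []string{"/bin/sh", "-c", "echo DONE; exit 1"},
			TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
		}},
	},
}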
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:56.864: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 31 15:03:57.562: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 31 15:04:00.582: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:04:00.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8331" for this suite.
STEP: Destroying namespace "webhook-8331-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":51,"skipped":908,"failed":0}
------------------------------
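The "rules to not include the create operation" STEP amounts to rewriting the operations list on the webhook's rule. One way to express that with client-go is a JSON patch on the ValidatingWebhookConfiguration; a minimal sketch, assuming a clientset and a configuration whose first webhook has one rule (the patch body is illustrative):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// dropCreateFromRule replaces the first rule's operations so the webhook
// stops matching CREATE, which is what the logged STEP does in spirit.
func dropCreateFromRule(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]`)
	_, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Patch(
		ctx, name, types.JSONPatchType, patch, metav1.PatchOptions{})
	return err
}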
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:03:47.325: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
Jan 31 15:03:49.364: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-2698 PodName:var-expansion-44bf279c-98d2-465c-babd-a841652160e0 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 15:03:49.364: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: test for file in mounted path
Jan 31 15:03:49.445: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-2698 PodName:var-expansion-44bf279c-98d2-465c-babd-a841652160e0 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 15:03:49.445: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: updating the annotation value
Jan 31 15:03:50.031: INFO: Successfully updated pod "var-expansion-44bf279c-98d2-465c-babd-a841652160e0"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
Jan 31 15:03:50.035: INFO: Deleting pod "var-expansion-44bf279c-98d2-465c-babd-a841652160e0" in namespace "var-expansion-2698"
Jan 31 15:03:50.039: INFO: Wait up to 5m0s for pod "var-expansion-44bf279c-98d2-465c-babd-a841652160e0" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:04:24.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2698" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":-1,"completed":54,"skipped":970,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:04:24.063: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-14a6b656-e34f-4fb5-8ac2-607b94083972
STEP: Creating a pod to test consume secrets
Jan 31 15:04:24.098: INFO: Waiting up to 5m0s for pod "pod-secrets-c21e3673-e0a7-4d26-9a5e-93267f575612" in namespace "secrets-9173" to be "Succeeded or Failed"
Jan 31 15:04:24.100: INFO: Pod "pod-secrets-c21e3673-e0a7-4d26-9a5e-93267f575612": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145316ms
Jan 31 15:04:26.104: INFO: Pod "pod-secrets-c21e3673-e0a7-4d26-9a5e-93267f575612": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006164893s
STEP: Saw pod success
Jan 31 15:04:26.104: INFO: Pod "pod-secrets-c21e3673-e0a7-4d26-9a5e-93267f575612" satisfied condition "Succeeded or Failed"
Jan 31 15:04:26.107: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-secrets-c21e3673-e0a7-4d26-9a5e-93267f575612 container secret-volume-test: <nil>
STEP: delete the pod
Jan 31 15:04:26.121: INFO: Waiting for pod pod-secrets-c21e3673-e0a7-4d26-9a5e-93267f575612 to disappear
Jan 31 15:04:26.125: INFO: Pod pod-secrets-c21e3673-e0a7-4d26-9a5e-93267f575612 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:04:26.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9173" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":975,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:04:26.134: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-b15e21f6-e7ee-4584-a02d-7c370fce10ec
STEP: Creating a pod to test consume secrets
Jan 31 15:04:26.171: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a9d67c2a-6b06-40d4-96a7-5f3cc69d595c" in namespace "projected-3725" to be "Succeeded or Failed"
Jan 31 15:04:26.174: INFO: Pod "pod-projected-secrets-a9d67c2a-6b06-40d4-96a7-5f3cc69d595c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.559408ms
Jan 31 15:04:28.179: INFO: Pod "pod-projected-secrets-a9d67c2a-6b06-40d4-96a7-5f3cc69d595c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007287495s
STEP: Saw pod success
Jan 31 15:04:28.179: INFO: Pod "pod-projected-secrets-a9d67c2a-6b06-40d4-96a7-5f3cc69d595c" satisfied condition "Succeeded or Failed"
Jan 31 15:04:28.187: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-projected-secrets-a9d67c2a-6b06-40d4-96a7-5f3cc69d595c container secret-volume-test: <nil>
STEP: delete the pod
Jan 31 15:04:28.204: INFO: Waiting for pod pod-projected-secrets-a9d67c2a-6b06-40d4-96a7-5f3cc69d595c to disappear
Jan 31 15:04:28.206: INFO: Pod pod-projected-secrets-a9d67c2a-6b06-40d4-96a7-5f3cc69d595c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:04:28.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3725" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":975,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:04:28.225: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating pod
Jan 31 15:04:30.279: INFO: Pod pod-hostip-b96c7956-58c4-42a7-9a9d-567ea6c68a4c has hostIP: 172.18.0.6
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:04:30.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8266" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":980,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:04:30.292: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1307
STEP: creating the pod
Jan 31 15:04:30.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-606 create -f -'
Jan 31 15:04:30.619: INFO: stderr: ""
Jan 31 15:04:30.619: INFO: stdout: "pod/pause created\n"
Jan 31 15:04:30.619: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 31 15:04:30.619: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-606" to be "running and ready"
Jan 31 15:04:30.622: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.161543ms
Jan 31 15:04:32.626: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.006912572s
Jan 31 15:04:32.626: INFO: Pod "pause" satisfied condition "running and ready"
Jan 31 15:04:32.626: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 31 15:04:32.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-606 label pods pause testing-label=testing-label-value'
Jan 31 15:04:32.726: INFO: stderr: ""
Jan 31 15:04:32.726: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 31 15:04:32.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-606 get pod pause -L testing-label'
Jan 31 15:04:32.814: INFO: stderr: ""
Jan 31 15:04:32.814: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          2s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 31 15:04:32.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-606 label pods pause testing-label-'
Jan 31 15:04:32.911: INFO: stderr: ""
Jan 31 15:04:32.911: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 31 15:04:32.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-606 get pod pause -L testing-label'
Jan 31 15:04:33.000: INFO: stderr: ""
Jan 31 15:04:33.000: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          2s    \n"
[AfterEach] Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1313
STEP: using delete to clean up resources
Jan 31 15:04:33.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-606 delete --grace-period=0 --force -f -'
Jan 31 15:04:33.103: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 15:04:33.103: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 31 15:04:33.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-606 get rc,svc -l name=pause --no-headers'
Jan 31 15:04:33.208: INFO: stderr: "No resources found in kubectl-606 namespace.\n"
Jan 31 15:04:33.208: INFO: stdout: ""
Jan 31 15:04:33.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-606 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 15:04:33.336: INFO: stderr: ""
Jan 31 15:04:33.336: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:04:33.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-606" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":58,"skipped":982,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
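The trailing dash in `kubectl label pods pause testing-label-` above is kubectl's label-removal syntax; at the API level the equivalent is a strategic-merge patch that sets the label to null. A minimal client-go sketch of both directions, assuming an existing clientset:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// setAndClearLabel adds testing-label, then removes it by patching it to null,
// mirroring the two kubectl label invocations in the log.
func setAndClearLabel(ctx context.Context, cs *kubernetes.Clientset, ns, pod string) error {
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		return err
	}
	clear := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	_, err := cs.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, clear, metav1.PatchOptions{})
	return err
}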
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:04:33.359: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-6b003265-2b3c-4d12-896a-d7f0945b199b
STEP: Creating a pod to test consume configMaps
Jan 31 15:04:33.396: INFO: Waiting up to 5m0s for pod "pod-configmaps-6fbca9a6-e8b9-44c0-a77d-e81507c359de" in namespace "configmap-2489" to be "Succeeded or Failed"
Jan 31 15:04:33.401: INFO: Pod "pod-configmaps-6fbca9a6-e8b9-44c0-a77d-e81507c359de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.400032ms
Jan 31 15:04:35.406: INFO: Pod "pod-configmaps-6fbca9a6-e8b9-44c0-a77d-e81507c359de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010109728s
STEP: Saw pod success
Jan 31 15:04:35.406: INFO: Pod "pod-configmaps-6fbca9a6-e8b9-44c0-a77d-e81507c359de" satisfied condition "Succeeded or Failed"
Jan 31 15:04:35.409: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-configmaps-6fbca9a6-e8b9-44c0-a77d-e81507c359de container configmap-volume-test: <nil>
STEP: delete the pod
Jan 31 15:04:35.429: INFO: Waiting for pod pod-configmaps-6fbca9a6-e8b9-44c0-a77d-e81507c359de to disappear
Jan 31 15:04:35.433: INFO: Pod pod-configmaps-6fbca9a6-e8b9-44c0-a77d-e81507c359de no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:04:35.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2489" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":991,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:04:35.457: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 31 15:04:35.490: INFO: Waiting up to 5m0s for pod "pod-6542f397-3dae-45d8-b23a-5439d1095b3a" in namespace "emptydir-8965" to be "Succeeded or Failed"
Jan 31 15:04:35.492: INFO: Pod "pod-6542f397-3dae-45d8-b23a-5439d1095b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.809861ms
Jan 31 15:04:37.496: INFO: Pod "pod-6542f397-3dae-45d8-b23a-5439d1095b3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006318011s
STEP: Saw pod success
Jan 31 15:04:37.496: INFO: Pod "pod-6542f397-3dae-45d8-b23a-5439d1095b3a" satisfied condition "Succeeded or Failed"
Jan 31 15:04:37.499: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-6542f397-3dae-45d8-b23a-5439d1095b3a container test-container: <nil>
STEP: delete the pod
Jan 31 15:04:37.513: INFO: Waiting for pod pod-6542f397-3dae-45d8-b23a-5439d1095b3a to disappear
Jan 31 15:04:37.516: INFO: Pod pod-6542f397-3dae-45d8-b23a-5439d1095b3a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:04:37.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8965" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":1000,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:04:37.628: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should delete a collection of pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of pods
Jan 31 15:04:37.659: INFO: created test-pod-1
Jan 31 15:04:37.665: INFO: created test-pod-2
Jan 31 15:04:37.670: INFO: created test-pod-3
STEP: waiting for all 3 pods to be located
STEP: waiting for all pods to be deleted
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:04:37.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4547" for this suite.
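A hedged sketch of the one call doing the heavy lifting in the pods spec above: client-go's DeleteCollection removes every pod matching a label selector in a single request, after which a test would poll List with the same selector until it returns empty. The kubeconfig path matches the log; the "type=collection-demo" selector and the namespace are invented for illustration.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// One API call removes every pod carrying the demo label; a real test
	// would then poll List() with the same selector until it comes back empty.
	if err := cs.CoreV1().Pods("default").DeleteCollection(
		context.TODO(),
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "type=collection-demo"},
	); err != nil {
		panic(err)
	}
}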
•
------------------------------
{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":-1,"completed":61,"skipped":1083,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:04:37.724: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 15:04:37.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-655 create -f -'
Jan 31 15:04:37.994: INFO: stderr: ""
Jan 31 15:04:37.994: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
Jan 31 15:04:37.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-655 create -f -'
Jan 31 15:04:38.244: INFO: stderr: ""
Jan 31 15:04:38.244: INFO: stdout: "service/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jan 31 15:04:39.247: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 15:04:39.247: INFO: Found 0 / 1
Jan 31 15:04:40.247: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 15:04:40.248: INFO: Found 1 / 1
Jan 31 15:04:40.248: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 31 15:04:40.250: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 31 15:04:40.250: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 31 15:04:40.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-655 describe pod agnhost-primary-qchqr' Jan 31 15:04:40.357: INFO: stderr: "" Jan 31 15:04:40.357: INFO: stdout: "Name: agnhost-primary-qchqr\nNamespace: kubectl-655\nPriority: 0\nNode: k8s-upgrade-and-conformance-d8uk6o-worker-z043bi/172.18.0.6\nStart Time: Tue, 31 Jan 2023 15:04:38 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 192.168.6.107\nIPs:\n IP: 192.168.6.107\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://cfce4fa7ad0afb077d3a80878771e2987e85d5f123749e320de4e4cb407d5e0f\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 31 Jan 2023 15:04:38 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-g8l2n (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-g8l2n:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-g8l2n\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-655/agnhost-primary-qchqr to k8s-upgrade-and-conformance-d8uk6o-worker-z043bi\n Normal Pulled 2s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n" Jan 31 15:04:40.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-655 describe rc agnhost-primary' Jan 31 15:04:40.483: INFO: stderr: "" Jan 31 15:04:40.483: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-655\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-qchqr\n" Jan 31 15:04:40.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-655 describe service agnhost-primary' Jan 31 15:04:40.612: INFO: stderr: "" Jan 31 15:04:40.612: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-655\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.136.19.111\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 192.168.6.107:6379\nSession Affinity: None\nEvents: <none>\n" Jan 31 15:04:40.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-655 describe node k8s-upgrade-and-conformance-d8uk6o-8ck6x-977mm' Jan 31 15:04:40.754: INFO: 
stderr: "" Jan 31 15:04:40.754: INFO: stdout: "Name: k8s-upgrade-and-conformance-d8uk6o-8ck6x-977mm\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=k8s-upgrade-and-conformance-d8uk6o-8ck6x-977mm\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-d8uk6o\n cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-nit25p\n cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-d8uk6o-8ck6x-977mm\n cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-d8uk6o-8ck6x\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Tue, 31 Jan 2023 14:37:12 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: k8s-upgrade-and-conformance-d8uk6o-8ck6x-977mm\n AcquireTime: <unset>\n RenewTime: Tue, 31 Jan 2023 15:04:34 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 31 Jan 2023 15:03:01 +0000 Tue, 31 Jan 2023 14:37:12 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 31 Jan 2023 15:03:01 +0000 Tue, 31 Jan 2023 14:37:12 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 31 Jan 2023 15:03:01 +0000 Tue, 31 Jan 2023 14:37:12 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 31 Jan 2023 15:03:01 +0000 Tue, 31 Jan 2023 14:37:56 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.9\n Hostname: k8s-upgrade-and-conformance-d8uk6o-8ck6x-977mm\nCapacity:\n cpu: 8\n ephemeral-storage: 253869360Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65860692Ki\n pods: 110\nAllocatable:\n cpu: 8\n ephemeral-storage: 253869360Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65860692Ki\n pods: 110\nSystem Info:\n Machine ID: b1493756170c4918b184536038eeadf4\n System UUID: 4a8002b4-0758-49ad-9e7b-06e87bd86aa3\n Boot ID: 4e99b044-1fe8-4e56-b292-50b4d76d801d\n Kernel Version: 5.4.0-1081-gke\n OS Image: Ubuntu 22.04.1 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.9\n Kubelet Version: v1.19.16\n Kube-Proxy Version: v1.19.16\nPodCIDR: 192.168.5.0/24\nPodCIDRs: 192.168.5.0/24\nProviderID: docker:////k8s-upgrade-and-conformance-d8uk6o-8ck6x-977mm\nNon-terminated Pods: (6 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-k8s-upgrade-and-conformance-d8uk6o-8ck6x-977mm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27m\n kube-system kindnet-dkf6f 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 27m\n kube-system kube-apiserver-k8s-upgrade-and-conformance-d8uk6o-8ck6x-977mm 250m (3%) 0 (0%) 0 (0%) 0 (0%) 27m\n kube-system kube-controller-manager-k8s-upgrade-and-conformance-d8uk6o-8ck6x-977mm 200m (2%) 0 (0%) 0 (0%) 0 (0%) 27m\n kube-system kube-proxy-p9m2s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22m\n kube-system kube-scheduler-k8s-upgrade-and-conformance-d8uk6o-8ck6x-977mm 100m (1%) 0 (0%) 0 (0%) 0 (0%) 27m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests 
Limits\n -------- -------- ------\n cpu 650m (8%) 100m (1%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 27m kubelet Starting kubelet.\n Warning InvalidDiskCapacity 27m kubelet invalid capacity 0 on image filesystem\n Normal NodeHasSufficientMemory 27m (x2 over 27m) kubelet Node k8s-upgrade-and-conformance-d8uk6o-8ck6x-977mm status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 27m (x2 over 27m) kubelet Node k8s-upgrade-and-conformance-d8uk6o-8ck6x-977mm status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 27m (x2 over 27m) kubelet Node k8s-upgrade-and-conformance-d8uk6o-8ck6x-977mm status is now: NodeHasSufficientPID\n Warning CheckLimitsForResolvConf 27m kubelet Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n Normal NodeAllocatableEnforced 27m kubelet Updated Node Allocatable limit across pods\n Normal Starting 27m kube-proxy Starting kube-proxy.\n Normal NodeReady 26m kubelet Node k8s-upgrade-and-conformance-d8uk6o-8ck6x-977mm status is now: NodeReady\n Normal Starting 22m kube-proxy Starting kube-proxy.\n"
Jan 31 15:04:40.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-655 describe namespace kubectl-655'
Jan 31 15:04:40.865: INFO: stderr: ""
Jan 31 15:04:40.865: INFO: stdout: "Name: kubectl-655\nLabels: e2e-framework=kubectl\n e2e-run=49e5644d-916b-43b2-ad8a-2a8e3131aca9\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:04:40.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-655" for this suite.
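The spec above shells out to kubectl and asserts on what `describe` prints. A rough sketch of that pattern, in the spirit of the test rather than its exact source: run `kubectl describe pod` and check that the output contains the sections the test cares about. The binary lookup via PATH and the field list are assumptions; the namespace and pod name are taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation shape as the log's "Running '/usr/local/bin/kubectl ...'" records.
	out, err := exec.Command(
		"kubectl", "--kubeconfig=/tmp/kubeconfig", "--namespace=kubectl-655",
		"describe", "pod", "agnhost-primary-qchqr",
	).CombinedOutput()
	if err != nil {
		panic(fmt.Errorf("kubectl describe failed: %v\n%s", err, out))
	}
	// Hypothetical assertion set; the real spec checks for similar fields.
	for _, want := range []string{"Name:", "Namespace:", "Node:", "Status:", "Controlled By:"} {
		if !strings.Contains(string(out), want) {
			panic("describe output missing section " + want)
		}
	}
	fmt.Println("describe output contains the expected sections")
}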
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":62,"skipped":1083,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:04:40.945: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jan 31 15:04:40.969: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:04:47.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3357" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":63,"skipped":1138,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:04:47.181: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-f18b5196-6bc2-46b9-8890-f93dd873dff7
STEP: Creating a pod to test consume secrets
Jan 31 15:04:47.281: INFO: Waiting up to 5m0s for pod "pod-secrets-8316f30b-6951-4ab2-98e7-7e78e33876b2" in namespace "secrets-5990" to be "Succeeded or Failed"
Jan 31 15:04:47.284: INFO: Pod "pod-secrets-8316f30b-6951-4ab2-98e7-7e78e33876b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.866264ms
Jan 31 15:04:49.288: INFO: Pod "pod-secrets-8316f30b-6951-4ab2-98e7-7e78e33876b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006636566s
STEP: Saw pod success
Jan 31 15:04:49.288: INFO: Pod "pod-secrets-8316f30b-6951-4ab2-98e7-7e78e33876b2" satisfied condition "Succeeded or Failed"
Jan 31 15:04:49.291: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-secrets-8316f30b-6951-4ab2-98e7-7e78e33876b2 container secret-volume-test: <nil>
STEP: delete the pod
Jan 31 15:04:49.307: INFO: Waiting for pod pod-secrets-8316f30b-6951-4ab2-98e7-7e78e33876b2 to disappear
Jan 31 15:04:49.310: INFO: Pod pod-secrets-8316f30b-6951-4ab2-98e7-7e78e33876b2 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:04:49.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5990" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1149,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:04:49.342: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 31 15:04:51.394: INFO: &Pod{ObjectMeta:{send-events-20a70586-fa1b-4ce7-9a09-f7363fe0c94b events-3744 /api/v1/namespaces/events-3744/pods/send-events-20a70586-fa1b-4ce7-9a09-f7363fe0c94b 453203c3-e451-4223-a702-1ae5e450a911 12325 0 2023-01-31 15:04:49 +0000 UTC <nil> <nil> map[name:foo time:369434927] map[] [] [] [{e2e.test Update v1 2023-01-31 15:04:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-31 15:04:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.109\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nftb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nftb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nftb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-d8uk6o-worker-z043bi,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-31 15:04:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-31 15:04:50 
+0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-31 15:04:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-31 15:04:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.6.109,StartTime:2023-01-31 15:04:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-31 15:04:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://93147b5be99100a67cf466e1b3ace35458348a730a1071e4c2fd75fedc4bf120,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.109,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Jan 31 15:04:53.398: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 31 15:04:55.402: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:04:55.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3744" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":65,"skipped":1162,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:04:55.418: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 31 15:04:55.449: INFO: Waiting up to 5m0s for pod "pod-3dfe8462-02fa-4625-8c26-f62968f8e309" in namespace "emptydir-6860" to be "Succeeded or Failed"
Jan 31 15:04:55.451: INFO: Pod "pod-3dfe8462-02fa-4625-8c26-f62968f8e309": Phase="Pending", Reason="", readiness=false. Elapsed: 2.566285ms
Jan 31 15:04:57.455: INFO: Pod "pod-3dfe8462-02fa-4625-8c26-f62968f8e309": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006331947s
STEP: Saw pod success
Jan 31 15:04:57.455: INFO: Pod "pod-3dfe8462-02fa-4625-8c26-f62968f8e309" satisfied condition "Succeeded or Failed"
Jan 31 15:04:57.458: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod pod-3dfe8462-02fa-4625-8c26-f62968f8e309 container test-container: <nil>
STEP: delete the pod
Jan 31 15:04:57.475: INFO: Waiting for pod pod-3dfe8462-02fa-4625-8c26-f62968f8e309 to disappear
Jan 31 15:04:57.479: INFO: Pod pod-3dfe8462-02fa-4625-8c26-f62968f8e309 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:04:57.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6860" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":66,"skipped":1162,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:04:57.507: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jan 31 15:04:57.537: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44ce80d7-67a1-4f99-92d9-778dce26207c" in namespace "downward-api-3932" to be "Succeeded or Failed"
Jan 31 15:04:57.539: INFO: Pod "downwardapi-volume-44ce80d7-67a1-4f99-92d9-778dce26207c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490045ms
Jan 31 15:04:59.543: INFO: Pod "downwardapi-volume-44ce80d7-67a1-4f99-92d9-778dce26207c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00646937s
STEP: Saw pod success
Jan 31 15:04:59.543: INFO: Pod "downwardapi-volume-44ce80d7-67a1-4f99-92d9-778dce26207c" satisfied condition "Succeeded or Failed"
Jan 31 15:04:59.546: INFO: Trying to get logs from node k8s-upgrade-and-conformance-d8uk6o-worker-z043bi pod downwardapi-volume-44ce80d7-67a1-4f99-92d9-778dce26207c container client-container: <nil>
STEP: delete the pod
Jan 31 15:04:59.560: INFO: Waiting for pod downwardapi-volume-44ce80d7-67a1-4f99-92d9-778dce26207c to disappear
Jan 31 15:04:59.563: INFO: Pod downwardapi-volume-44ce80d7-67a1-4f99-92d9-778dce26207c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:04:59.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3932" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":1176,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
SS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:02:44.985: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-9325
STEP: creating service affinity-clusterip-transition in namespace services-9325
STEP: creating replication controller affinity-clusterip-transition in namespace services-9325
I0131 15:02:45.031680      23 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-9325, replica count: 3
I0131 15:02:48.082142      23 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 31 15:02:48.088: INFO: Creating new exec pod
Jan 31 15:02:51.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Jan 31 15:02:53.268: INFO: rc: 1
Jan 31 15:02:53.268: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80:
Command stdout:
stderr:
+ nc -zv -t -w 2 affinity-clusterip-transition 80
nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1
error: exit status 1
Retrying...
Jan 31 15:02:54.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:02:56.432: INFO: rc: 1 Jan 31 15:02:56.432: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:02:57.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:02:59.440: INFO: rc: 1 Jan 31 15:02:59.440: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:03:00.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:02.440: INFO: rc: 1 Jan 31 15:03:02.440: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:03:03.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:05.453: INFO: rc: 1 Jan 31 15:03:05.453: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 31 15:03:06.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:08.453: INFO: rc: 1 Jan 31 15:03:08.453: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:03:09.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:11.440: INFO: rc: 1 Jan 31 15:03:11.440: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:03:12.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:14.445: INFO: rc: 1 Jan 31 15:03:14.445: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:03:15.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:17.469: INFO: rc: 1 Jan 31 15:03:17.469: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 31 15:03:18.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:20.453: INFO: rc: 1 Jan 31 15:03:20.453: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:03:21.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:23.454: INFO: rc: 1 Jan 31 15:03:23.454: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:03:24.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:26.442: INFO: rc: 1 Jan 31 15:03:26.442: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:03:27.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:29.443: INFO: rc: 1 Jan 31 15:03:29.443: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 31 15:03:30.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:32.427: INFO: rc: 1 Jan 31 15:03:32.427: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:03:33.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:35.515: INFO: rc: 1 Jan 31 15:03:35.515: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:03:36.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:38.447: INFO: rc: 1 Jan 31 15:03:38.448: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:03:39.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:41.518: INFO: rc: 1 Jan 31 15:03:41.518: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 31 15:03:42.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:44.461: INFO: rc: 1 Jan 31 15:03:44.461: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:03:45.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:47.450: INFO: rc: 1 Jan 31 15:03:47.450: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:03:48.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:50.470: INFO: rc: 1 Jan 31 15:03:50.471: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:03:51.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:53.444: INFO: rc: 1 Jan 31 15:03:53.444: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 31 15:03:54.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:56.441: INFO: rc: 1 Jan 31 15:03:56.441: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:03:57.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:03:59.441: INFO: rc: 1 Jan 31 15:03:59.441: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:04:00.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:04:02.444: INFO: rc: 1 Jan 31 15:04:02.444: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:04:03.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:04:05.492: INFO: rc: 1 Jan 31 15:04:05.492: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 31 15:04:06.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:04:08.459: INFO: rc: 1 Jan 31 15:04:08.459: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:04:09.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:04:11.454: INFO: rc: 1 Jan 31 15:04:11.455: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:04:12.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:04:14.492: INFO: rc: 1 Jan 31 15:04:14.492: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 31 15:04:15.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 15:04:17.441: INFO: rc: 1 Jan 31 15:04:17.441: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 31 15:04:18.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Jan 31 15:04:20.437: INFO: rc: 1
Jan 31 15:04:20.437: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80:
Command stdout:
stderr: + nc -zv -t -w 2 affinity-clusterip-transition 80
nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1
error: exit status 1
Retrying...
[the identical probe was retried every 3 seconds, from 15:04:21.269 through 15:04:51.269; every attempt returned rc 1 with the same nc connect timeout and "Retrying..." message]
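For reference, each retry above is the same probe the harness loops on: a TCP connect check from the exec pod to the Service's DNS name with a 2-second timeout. It can be reproduced by hand against the workload cluster (paths and names taken verbatim from the log):

    # TCP connect probe; exit 0 means the Service name resolved and the ClusterIP accepted the connection
    kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 \
      exec execpod-affinityrmb46 -- /bin/sh -c 'nc -zv -t -w 2 affinity-clusterip-transition 80'

Every attempt here timed out ("Operation in progress"), i.e. the ClusterIP never accepted a connection during the whole 2m0s window.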
Jan 31 15:04:53.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9325 exec execpod-affinityrmb46 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Jan 31 15:04:55.634: INFO: rc: 1
Jan 31 15:04:55.634: INFO: Service reachability failing with error: [same nc connect timeout as in every attempt above]
Jan 31 15:04:55.634: FAIL: Unexpected error:
    <*errors.errorString | 0xc000b32290>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000b014a0, 0x5416760, 0xc001d70840, 0xc000b51440, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3511 +0x62e
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3466
k8s.io/kubernetes/test/e2e/network.glob..func24.27()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002db9e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002db9e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002db9e00, 0x4df04f8)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3

Jan 31 15:04:55.635: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-9325, will wait for the garbage collector to delete the pods
Jan 31 15:04:55.721: INFO: Deleting ReplicationController affinity-clusterip-transition took: 10.585884ms
Jan 31 15:04:56.121: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.254646ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:05:08.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9325" for this suite.
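A minimal triage sketch for this failure, assuming the cluster were still reachable (these commands are illustrative and not part of the test run; the suite's cleanup deletes the namespace):

    # Did the Service have ready endpoints while the probe was failing?
    kubectl --kubeconfig=/tmp/kubeconfig -n services-9325 get svc,endpoints,pods -o wide
    # The test ran right after the v1.19.16 upgrade, so kube-proxy state on the new nodes is worth checking
    kubectl --kubeconfig=/tmp/kubeconfig -n kube-system get pods -l k8s-app=kube-proxy -o wide

If the Endpoints object is empty or kube-proxy pods are unhealthy on the upgraded nodes, the connect timeouts above are expected rather than a harness artifact.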
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786

• Failure [143.762 seconds]
[sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  Jan 31 15:04:55.634: Unexpected error:
      <*errors.errorString | 0xc000b32290>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3511
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":0,"skipped":11,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 14:55:07.022: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-494.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-494.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-494.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-494.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-494.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-494.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-494.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-494.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-494.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-494.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-494.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-494.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-494.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 50.9.135.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.135.9.50_udp@PTR;check="$$(dig +tcp +noall +answer +search 50.9.135.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.135.9.50_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: [the identical loop, writing its results under the jessie_ prefix instead of wheezy_]
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
From 14:55:09 the framework polled the probe pod's /results files every five seconds. Each poll emitted one line per still-missing record of the form "Jan 31 14:55:NN: INFO: Unable to read <name> from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570)", followed by a summary of everything still failing. The summaries were:
Jan 31 14:55:09.191: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_udp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-494.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-494.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-494.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-494.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.135.9.50_udp@PTR 10.135.9.50_tcp@PTR jessie_udp@dns-test-service.dns-494.svc.cluster.local jessie_tcp@dns-test-service.dns-494.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-494.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-494.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-494.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-494.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.135.9.50_udp@PTR 10.135.9.50_tcp@PTR]
Jan 31 14:55:14.251: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [the same 20 records as at 14:55:09]
Jan 31 14:55:19.254: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_udp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-494.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-494.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-494.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-494.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.135.9.50_udp@PTR 10.135.9.50_tcp@PTR jessie_udp@dns-test-service.dns-494.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-494.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-494.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-494.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.135.9.50_udp@PTR 10.135.9.50_tcp@PTR]
Jan 31 14:55:24.250: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [the same 18 records as at 14:55:19]
Jan 31 14:55:29.250: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [the same 18 records as at 14:55:19]
Jan 31 14:55:34.256: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [the same 18 records as at 14:55:19]
Jan 31 14:55:39.256: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_udp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-494.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-494.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-494.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.135.9.50_udp@PTR 10.135.9.50_tcp@PTR jessie_udp@dns-test-service.dns-494.svc.cluster.local jessie_tcp@PodARecord 10.135.9.50_udp@PTR 10.135.9.50_tcp@PTR]
Jan 31 14:55:44.275: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [the same 13 records as at 14:55:39]
Jan 31 14:55:49.258: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_udp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-494.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-494.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-494.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.135.9.50_tcp@PTR jessie_udp@dns-test-service.dns-494.svc.cluster.local 10.135.9.50_tcp@PTR]
Jan 31 14:55:54.259: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [the same 10 records as at 14:55:49]
Jan 31 14:55:59.253: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_udp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-494.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.135.9.50_tcp@PTR jessie_udp@dns-test-service.dns-494.svc.cluster.local 10.135.9.50_tcp@PTR]
Jan 31 14:56:04.259: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_udp@dns-test-service.dns-494.svc.cluster.local
wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-494.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.135.9.50_tcp@PTR jessie_udp@dns-test-service.dns-494.svc.cluster.local 10.135.9.50_tcp@PTR] Jan 31 14:56:09.195: INFO: Unable to read wheezy_udp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:09.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:09.221: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:09.228: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:09.232: INFO: Unable to read jessie_udp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:09.265: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:09.265: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_udp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord 10.135.9.50_tcp@PTR jessie_udp@dns-test-service.dns-494.svc.cluster.local 10.135.9.50_tcp@PTR] Jan 31 14:56:14.195: INFO: Unable to read wheezy_udp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:14.198: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:14.216: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:14.222: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:14.225: INFO: Unable to read jessie_udp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:14.254: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods 
dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:14.255: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_udp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord 10.135.9.50_tcp@PTR jessie_udp@dns-test-service.dns-494.svc.cluster.local 10.135.9.50_tcp@PTR] Jan 31 14:56:19.202: INFO: Unable to read wheezy_udp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:19.205: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:19.227: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:19.233: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:19.265: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:19.265: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_udp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord 10.135.9.50_tcp@PTR 10.135.9.50_tcp@PTR] Jan 31 14:56:24.195: INFO: Unable to read wheezy_udp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:24.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:24.217: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:24.223: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:24.252: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:24.252: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_udp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord 10.135.9.50_tcp@PTR 10.135.9.50_tcp@PTR] Jan 31 14:56:29.195: INFO: Unable to read wheezy_udp@dns-test-service.dns-494.svc.cluster.local from pod 
dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:29.198: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:29.220: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:29.226: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:29.255: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:29.255: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_udp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord 10.135.9.50_tcp@PTR 10.135.9.50_tcp@PTR] Jan 31 14:56:34.195: INFO: Unable to read wheezy_udp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:34.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:34.221: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:34.228: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:34.261: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:34.261: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_udp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord 10.135.9.50_tcp@PTR 10.135.9.50_tcp@PTR] Jan 31 14:56:39.194: INFO: Unable to read wheezy_udp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:39.197: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:39.215: INFO: Unable to read wheezy_tcp@PodARecord from pod 
dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:39.222: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:39.250: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:39.250: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_udp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord 10.135.9.50_tcp@PTR 10.135.9.50_tcp@PTR] Jan 31 14:56:44.196: INFO: Unable to read wheezy_udp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:44.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:44.220: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:44.228: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:44.260: INFO: Unable to read 10.135.9.50_tcp@PTR from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:44.260: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_udp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord 10.135.9.50_tcp@PTR 10.135.9.50_tcp@PTR] Jan 31 14:56:49.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:49.219: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:49.254: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:56:54.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:54.219: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods 
dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:54.253: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:56:59.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:59.230: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:56:59.269: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:57:04.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:04.225: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:04.263: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:57:09.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:09.222: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:09.265: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:57:14.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:14.219: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:14.261: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:57:19.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:19.218: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:19.267: INFO: Lookups using 
dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:57:24.202: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:24.226: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:24.264: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:57:29.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:29.221: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:29.257: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:57:34.204: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:34.227: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:34.273: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:57:39.198: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:39.217: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:39.251: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:57:44.198: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:44.218: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:44.250: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local 
wheezy_tcp@PodARecord] Jan 31 14:57:49.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:49.217: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:49.264: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:57:54.201: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:54.226: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:54.269: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:57:59.203: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:59.243: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:57:59.282: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:58:04.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:04.218: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:04.252: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:58:09.198: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:09.217: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:09.271: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:58:14.198: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod 
dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:14.214: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:14.246: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:58:19.204: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:19.227: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:19.262: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:58:24.198: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:24.216: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:24.247: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:58:29.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:29.220: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:29.254: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:58:34.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:34.219: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:34.252: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:58:39.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods 
dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:39.220: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:39.262: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:58:44.198: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:44.216: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:44.251: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:58:49.198: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:49.216: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:49.253: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:58:54.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:54.218: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:54.274: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:58:59.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:59.224: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:58:59.258: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:59:04.198: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:04.218: INFO: Unable to read wheezy_tcp@PodARecord from pod 
dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:04.256: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:59:09.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:09.220: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:09.265: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:59:14.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:14.231: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:14.271: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:59:19.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:19.221: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:19.262: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:59:24.213: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:24.249: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:24.288: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:59:29.197: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:29.216: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods 
dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:29.251: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:59:34.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:34.217: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:34.257: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:59:39.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:39.227: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:39.263: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:59:44.198: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:44.215: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:44.250: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:59:49.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:49.218: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:49.260: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:59:54.198: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:54.216: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:54.254: INFO: Lookups using 
dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 14:59:59.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:59.229: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 14:59:59.277: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 15:00:04.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:04.221: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:04.255: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 15:00:09.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:09.216: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:09.254: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 15:00:14.202: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:14.219: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:14.253: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 15:00:19.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:19.224: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:19.257: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local 
wheezy_tcp@PodARecord] Jan 31 15:00:24.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:24.216: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:24.253: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 15:00:29.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:29.218: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:29.253: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 15:00:34.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:34.217: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:34.251: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 15:00:39.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:39.219: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:39.258: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 15:00:44.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:44.224: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570) Jan 31 15:00:44.266: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord] Jan 31 15:00:49.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod 
dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570)
Jan 31 15:00:49.221: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570)
Jan 31 15:00:49.262: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord]
[... the same three lookup-failure lines repeat on a ~5s poll interval from 15:00:54.199 through 15:05:04.257, with an identical result on every attempt ...]
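The failing wheezy_tcp probes above are DNS lookups over TCP for the test service's FQDN and for the probe pod's own A record. As a point of reference, here is a minimal standalone Go sketch of equivalent lookups; the resolver's Dial override forcing "tcp" mirrors the tcp-suffixed probe names, and the pod A record shown is hypothetical (the real record encodes the probe pod's IP):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Force queries over TCP, mirroring the wheezy_tcp@... probe names;
	// the plain wheezy@... probes would use UDP.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "tcp", address)
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	for _, name := range []string{
		"dns-test-service.dns-494.svc.cluster.local", // service FQDN from the log
		"10-244-1-7.dns-494.pod.cluster.local",       // hypothetical pod A record
	} {
		addrs, err := r.LookupHost(ctx, name)
		fmt.Printf("%s -> %v (err: %v)\n", name, addrs, err)
	}
}
```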
Jan 31 15:05:09.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570)
Jan 31 15:05:09.237: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570)
Jan 31 15:05:09.302: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord]
Jan 31 15:05:09.322: INFO: Unable to read wheezy_tcp@dns-test-service.dns-494.svc.cluster.local from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570)
Jan 31 15:05:09.355: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570: the server could not find the requested resource (get pods dns-test-f7a20698-c97e-41c8-baea-379e00815570)
Jan 31 15:05:09.444: INFO: Lookups using dns-494/dns-test-f7a20698-c97e-41c8-baea-379e00815570 failed for: [wheezy_tcp@dns-test-service.dns-494.svc.cluster.local wheezy_tcp@PodARecord]
Jan 31 15:05:09.445: FAIL: Unexpected error:
    <*errors.errorString | 0xc0001f6200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc0018e6000, 0x14, 0x18, 0x4bfaebd, 0x7, 0xc000580400, 0x5416760, 0xc001664f20, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:539 +0x18a
k8s.io/kubernetes/test/e2e/network.assertFilesExist(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:533
k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000ed7b80, 0xc000580400, 0xc0018e6000, 0x14, 0x18)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:596 +0x34e
k8s.io/kubernetes/test/e2e/network.glob..func2.5()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:181 +0xea5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00345a480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc00345a480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc00345a480, 0x4df04f8)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 31 15:05:09.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-494" for this suite.
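The "timed out waiting for the condition" message is the sentinel timeout error from the apimachinery wait package: per the stack trace, assertFilesContain keeps re-reading the probe pod's result files until all expected lookups succeed or the poll window closes. A minimal sketch of that polling pattern, with a hypothetical condition standing in for the file check (the real spec polled for ~10 minutes, matching the 602s duration reported below):

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Hypothetical stand-in for assertFilesContain's check: the real test
	// re-reads the probe pod's result files on every tick and only returns
	// true once each expected file holds a DNS answer.
	lookupsComplete := func() (bool, error) {
		return false, nil // permanently pending, like the lookups above
	}

	// Short timeout to keep the sketch quick; the conformance test uses a
	// much longer window with a ~5s interval.
	err := wait.PollImmediate(5*time.Second, 30*time.Second, lookupsComplete)
	if errors.Is(err, wait.ErrWaitTimeout) {
		fmt.Println(err) // prints: timed out waiting for the condition
	}
}
```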
• Failure [602.538 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  Jan 31 15:05:09.445: Unexpected error:
      <*errors.errorString | 0xc0001f6200>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:539
------------------------------
{"msg":"FAILED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":288,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 31 15:05:08.749: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-1374
STEP: creating service affinity-clusterip-transition in namespace services-1374
STEP: creating replication controller affinity-clusterip-transition in namespace services-1374
I0131 15:05:08.797988 23 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-1374, replica count: 3
I0131 15:05:11.849443 23 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 31 15:05:11.855: INFO: Creating new exec pod
Jan 31 15:05:14.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1374 exec execpod-affinitykjm5b -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Jan 31 15:05:15.043: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Jan 31 15:05:15.043: INFO: stdout: ""
Jan 31 15:05:15.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1374 exec execpod-affinitykjm5b -- /bin/sh -x -c nc -zv -t -w 2 10.140.105.24 80'
Jan 31 15:05:15.223: INFO: stderr: "+ nc -zv -t -w 2 10.140.105.24 80\nConnection to 10.140.105.24 80 port [tcp/http] succeeded!\n"
Jan 31 15:05:15.223: INFO: stdout: ""
Jan 31 15:05:15.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1374 exec execpod-affinitykjm5b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.140.105.24:80/ ; don
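The shell command above is cut off in the captured log; it is the affinity probe loop, curling the service ClusterIP 16 times from the exec pod to see which backend answers each request. A hypothetical Go equivalent of that loop, assuming (as the test's backends do) that each pod responds with its own name:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// Hit the service ClusterIP 16 times and tally which backend answers.
// With ClientIP session affinity active, every response should name the
// same pod; after affinity is switched off, responses should spread out.
func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	backends := map[string]int{}
	for i := 0; i < 16; i++ {
		resp, err := client.Get("http://10.140.105.24:80/") // ClusterIP from the log
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		backends[string(body)]++ // assumes the backend echoes its pod name
	}
	fmt.Println("distinct backends seen:", len(backends), backends)
}
```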